Thread: C Programming an image processing tool.

  1. #1
    Registered User
    Join Date
    Dec 2015

    C Programming an image processing tool.

    I wanted to share this because I thought it was worth telling people about. I have written a new algorithm based on convolutional neural networks.

    It's not finished yet, but I have included here the first stage: a set of simple functions for preparing the image for input to the neural network. I can also provide a code snippet for the NN, which I think is one of my best. I have been writing C/C++ algorithms for a while and find it very enjoyable. Here is the result of a simple convolution mask. It uses the Laplacian of Gaussian double differential, which sounds like a mouthful but basically it filters the image into zero-crossings. These zero-crossings then drive the rest of the algorithm, which will first cluster them and then classify them as features using the neural network. It's exciting stuff and I just wanted to share my enthusiasm for this topic, which I know is very popular with people interested in AI, for example character recognition. It is also helpful to know that you can do this easily in C, because it means you don't have to rely on a photo-editing package that doesn't always deliver the results you want!

    So here is a snippet from my code.

    Part 1 - A function that applies a convolution filter to an image array in C (easy)
    #include <math.h>
    #include <stdlib.h>
    #include <string.h>

    void laplac_edge_filter(double gauss, int *image, int size){
        int width = (int)sqrt((double)size);  // image is width x width pixels
        // change this with different values for gauss from the equation,
        // and also the size of the convolution matrix
        int fcon[3][3] = {                    // using filter size 3x3
            { 0, -1,  0},
            {-1,  8, -1},
            { 0, -1,  0}
        };
        int *out = malloc(size * sizeof(int));
        for(int y = 0; y < width; y++){
            for(int x = 0; x < width; x++){
                int sum = 0;
                for(int i = 0; i < 3; i++){
                    for(int j = 0; j < 3; j++){
                        int yy = y + i - 1, xx = x + j - 1;
                        if(yy >= 0 && yy < width && xx >= 0 && xx < width)
                            sum += fcon[i][j] * image[yy * width + xx];
                    }
                }
                out[y * width + x] = sum;     // sum of products, not an in-place multiply
            }
        }
        memcpy(image, out, size * sizeof(int));
        free(out);
    }
    Of course this function is quite easy and straightforward; the difficult part comes later, when I will amend it to derive the convolution mask or array that is applied to the image.
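
    Since the later stages hunt for zero-crossings in the filtered image, here is a minimal sketch of how that search might look. The function name `find_zero_crossings` and the output convention are my own assumptions, not from the post: it marks a pixel wherever the filtered value changes sign against its right or lower neighbour.

    ```c
    /* Minimal zero-crossing detector (hypothetical helper, not from the post):
       sets out[i] = 1 where the Laplacian-filtered image changes sign
       between a pixel and its right or lower neighbour, else 0. */
    void find_zero_crossings(const int *filtered, int *out, int width){
        for(int y = 0; y < width; y++){
            for(int x = 0; x < width; x++){
                int here = filtered[y * width + x];
                int cross = 0;
                if(x + 1 < width && (here ^ filtered[y * width + x + 1]) < 0)
                    cross = 1;   /* sign change to the right */
                if(y + 1 < width && (here ^ filtered[(y + 1) * width + x]) < 0)
                    cross = 1;   /* sign change below */
                out[y * width + x] = cross;
            }
        }
    }
    ```

    The XOR-sign trick `(a ^ b) < 0` is just a quick way to test that two ints have opposite signs.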

    Part 2 - I thought this might be helpful as it shows an easy way to read and write images saved as ASCII PGM files. Take a photo, convert it to greyscale if you wish, save it as ASCII PGM, and this will easily read it into an array. It's as basic as it gets, but it is a good primer to image processing in C.

    void load_image(int *size, const char *fname, int *array){
        char magic[3];
        int c, width, height, maxval;
        FILE *fh = fopen(fname, "r");
        if(fh == NULL){
            printf("Unable to open file %s\n", fname);
            exit(1);
        }
        fscanf(fh, "%2s", magic);              // "P2" = ASCII greyscale PGM
        while((c = fgetc(fh)) == '\n' || c == ' ' || c == '\r');
        while(c == '#'){                       // skip comment lines
            while(fgetc(fh) != '\n');
            c = fgetc(fh);
        }
        ungetc(c, fh);
        fscanf(fh, "%d %d %d", &width, &height, &maxval);
        for(int i = 0; i < width * height; i++)
            fscanf(fh, "%d", &array[i]);
        *size = width * height;
        fclose(fh);
    }

    void save_image(int size, const char *fname, int *array){
        int width = (int)sqrt((double)size);   // assumes a square image
        FILE *fout = fopen(fname, "w");
        if(fout == NULL){
            printf("Unable to open file %s\n", fname);
            exit(1);
        }
        fprintf(fout, "P2\n# CREATOR: Edge Detect 0.1\n%d %d\n255\n", width, width);
        for(int i = 0; i < size; i++)
            fprintf(fout, "%d\n", array[i]);
        fclose(fout);
    }
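
    For reference, an ASCII PGM file of the kind these functions handle looks like this (a made-up 3x3 example, not taken from the post): magic number, optional comment, width and height, maximum grey value, then the pixels.

    ```
    P2
    # CREATOR: Edge Detect 0.1
    3 3
    255
    0 128 255
    128 255 128
    255 128 0
    ```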

    Part 3 - When you go hunting for zero-crossings in an image whose values indicate a feature, it helps if, as well as filtering the image, you can threshold it into a set of binary intensity levels (1, 0). Thresholding is also very easy:

    void threshold_image(int *image, int size, int level){
        for(int i = 0; i < size; i++)
            image[i] = (image[i] >= level) ? 1 : 0;   // binary intensity 1 or 0
    }

    Part 4 - Here are a few of the functions: a sneak preview of the final algorithm, the main engine that will do the work of classifying the image's features after the clustering algorithm has done its work.

    A. First, lay out the class structure of the neural net algorithm:

    class neuron{
    public:
        double *dG;      //DeltaG
        double *Wt;      //Weight matrix
        double O;        //Theta - bias for ith unit
        double s;        //Activation of ith unit
        double p1, p2;   //Probability of node relative to phase
        void init(int nodes){
            //number of nodes in previous layer: memory for weights and deltas
            dG = new double[nodes];
            Wt = new double[nodes];
        }
    };

    class Nets_and_Boltz{
    public:
        int numlayers;
        int *numnodes;    //topology of net
        int numinputs;
        double *inputs;   //input array, used for normalisation of data prior to input
        neuron *vnode;    //visible input units
        neuron **hnode;   //hidden units
        double R;         //lrate
        double T;         //Temp
        double K;         //Boltzmann constant
        int **comb_array; //neural assembly array: stores which neurons are in each assembly
        double update_net(double *in, int phase);
    };
    B. Then, after you initialise all of these structures and arrays (i.e. give them memory), we can write an update function:

    double Nets_and_Boltz::update_net(double *in, int phase){
        double deltaE, gE = 0;
        double prob;
        //; update feedforward: Si = Sum WijSj - 0i (1,0)
        //; 0i = threshold for i
        //; Si = activation for ith unit
        //; Wij = weight between unit Si and Sj
        //; Delta Ei = Sum WijSj + 0i
        //; Prob i=on = 1/(1+exp(-Delta Ei/T))  logistic function
        //; Si = 1 if Prob i >= PseudoRand else Si = 0
        for(int i = 0; i < numnodes[0]; i++){
            if(phase == 1)
                vnode[i].s = in[i];       // clamp to input (0 or 1): +ve phase
            else
                vnode[i].s = rand() % 2;  // random visible state: -ve phase
        }
        for(int i = 1; i < numlayers; i++){
            for(int j = 0; j < numnodes[i]; j++){
                deltaE = 0;
                for(int k = 0; k < numnodes[i-1]; k++){
                    if(i == 1){
                        deltaE += hnode[i][j].Wt[k] * vnode[k].s;
                        gE += hnode[i][j].Wt[k] * vnode[k].s * hnode[i][j].s;      //global energy: visible layer
                    }else{
                        deltaE += hnode[i][j].Wt[k] * hnode[i-1][k].s;
                        gE += hnode[i][j].Wt[k] * hnode[i-1][k].s * hnode[i][j].s; //global energy: hidden layers
                    }
                }
                deltaE += hnode[i][j].O;               // add the bias once, not per weight
                gE += hnode[i][j].O * hnode[i][j].s;   // bias contribution to global energy
                prob = 1 / (1 + exp(-deltaE / T));
                if(prob * 100 >= (rand() % 100 + 1))
                    hnode[i][j].s = 1;
                else
                    hnode[i][j].s = 0;
                if(phase == 1)
                    hnode[i][j].p1 = prob;             //Pi(on) in the +ve phase
                else
                    hnode[i][j].p2 = prob;             //Pi(on) in the -ve phase
            }
        }
        return gE;
    }
    This is the update function and class structure for a stochastic neural network, which uses a temperature-varied function to move the state from a high energy to a low energy. Once a low-energy state is gradually reached, through a process called simulated annealing, a classification of the image's features is reached which yields the most information.
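
    The annealing itself can be sketched as a simple cooling loop. The decay factor and stopping temperature below are my own illustrative choices, not values from the post:

    ```c
    /* Simulated annealing cooling loop (illustrative sketch):
       run the stochastic update a few times at each temperature,
       then cool geometrically until the system is nearly frozen. */
    double anneal(double t_start, double t_stop, double decay){
        double T = t_start;
        while(T > t_stop){
            /* here one would call update_net(...) several times at this T,
               sampling states until the network approaches equilibrium */
            T *= decay;   /* geometric cooling schedule, e.g. decay = 0.95 */
        }
        return T;         /* final (frozen) temperature */
    }
    ```

    Cooling slowly (decay close to 1) gives the network more time to escape shallow local minima before it freezes.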

    A process similar to simulated annealing is now being used by some of the first quantum computers, whose qubits perform quantum annealing to solve complex problems in machine learning.

    A good example is finding the correct solution to a non-linear equation f(x) = y^2, or the famous XOR problem; both are non-linear, as is the classification of features in an image. A neural network is useful for solving these non-linear problems, but it helps to prepare the way by using linear problem solvers or heuristics, like the image filter or Laplacian edge detector, to normalise the image so that the neural network has less chance of making a mistaken classification.
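
    The XOR claim can be checked by brute force. This little search (my own illustration, not from the post) scans a grid of weights and thresholds for a single linear threshold unit; it never finds one that reproduces XOR, while it does find one for OR:

    ```c
    /* Brute-force check (illustrative): can a single threshold unit
       s = (w1*x1 + w2*x2 >= theta) compute the given 2-input truth table?
       target[] holds f(0,0), f(0,1), f(1,0), f(1,1). */
    int linearly_separable(const int target[4]){
        for(double w1 = -2; w1 <= 2; w1 += 0.5)
        for(double w2 = -2; w2 <= 2; w2 += 0.5)
        for(double th = -2; th <= 2; th += 0.5){
            int ok = 1;
            for(int x1 = 0; x1 < 2 && ok; x1++)
            for(int x2 = 0; x2 < 2 && ok; x2++){
                int s = (w1 * x1 + w2 * x2 >= th) ? 1 : 0;
                if(s != target[x1 * 2 + x2]) ok = 0;
            }
            if(ok) return 1;   /* found separating weights */
        }
        return 0;              /* no linear unit matches the table */
    }
    ```

    This is exactly why at least one hidden layer is needed for XOR-like feature classification.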

    To read about my favourite science fiction and new cyberpunk fiction, please check out my website at Spacefarm dot org dot UK, and read jUice dot extramindcorp dot com

    If you liked this post, be sure to read the next one, entitled:

    C Programming a Combinatorial Problem Solver

  2. #2
    Registered User
    Join Date
    Dec 2015
    Well, I don't know if anyone noticed this, but the update function is actually incorrect, depending on whether you are using a feedforward network. Nobody noticed: if you are using a Boltzmann machine that employs simulated annealing, then the update function should connect all neurons symmetrically. This one doesn't; it updates using the logistic function derived from the Boltzmann neural network, calculating the probability 1/(1+exp(-deltaE/T)), but it does so layer by layer.

    If it remains feedforward, layer by layer, then the training function must be backpropagation, which I will also implement. However, if it is to learn using gradient descent via the Kullback-Leibler divergence, then we use simulated annealing to arrive at P+ and P-: the probabilities that any neuron is on in a state of thermal equilibrium, reached through annealing with and without the inputs and outputs clamped. I will create another two posts, one detailing the success of this learning function and another showing how the same network can learn using backpropagation for a feedforward design. Using a feedforward design is analogous to pruning the weights to behave asymmetrically.

    The Kullback-Leibler divergence is a measure of how similar the two distributions P+ (input and output clamped) and P- (free-running) are, each reached by gradually reducing the temperature until equilibrium. Gradient descent is then used to change the weights so as to reduce this difference, and hence the error in the neural network.
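
    For a single unit with on-probabilities p in the clamped phase and q in the free phase, the divergence can be written down directly. This is a minimal sketch with my own variable names; it assumes 0 < p, q < 1:

    ```c
    #include <math.h>

    /* Kullback-Leibler divergence between two Bernoulli distributions:
       G = p*log(p/q) + (1-p)*log((1-p)/(1-q)).
       In a Boltzmann machine, p is a unit's on-probability in the
       clamped (+) phase and q in the free-running (-) phase.
       Assumes 0 < p < 1 and 0 < q < 1 to avoid log(0). */
    double kl_bernoulli(double p, double q){
        return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q));
    }
    ```

    G is zero only when the two distributions match, and always non-negative, so driving it down with gradient descent pulls the free-running statistics toward the clamped ones.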
