C - mpi programming

This is a discussion on C - mpi programming within the C Programming forums, part of the General Programming Boards category.

  1. #1
    Registered User
    Join Date
    Feb 2009
    Posts
    6

    C - mpi programming

    Hi,

    I am trying to implement a program using (Open) MPI that sends a group of numbers to each process; each process calculates the sum of its group and returns it to the master, which in turn calculates the total sum.
    I am new to MPI and C programming.

    Can anyone tell me what is wrong in my code, please?

    Code:
    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char **argv)
    {
    	int my_rank;
    	int processes;
    	int source, dest;
    	int tag = 1234;
    	int namelen;
    	char processor_name[MPI_MAX_PROCESSOR_NAME];

    	int startInt = 1;
    	int endInt = 1000;

    	int nb = (endInt - startInt) + 1;
    	int numbers[nb];

    	int i, j, k, n, s;
    	int part_sum;

    	MPI_Status status;
    	MPI_Init(&argc, &argv);
    	MPI_Comm_size(MPI_COMM_WORLD, &processes);
    	MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    	MPI_Get_processor_name(processor_name, &namelen);

    	/* fill the array with the numbers startInt..endInt */
    	for (i = 0, k = startInt; i < nb; i++, k++)
    	{
    		numbers[i] = k;
    	}

    	if (my_rank != 0)
    	{
    		/* slave: receive a chunk, sum it, send the partial sum back */
    		MPI_Recv(&numbers[n], 100, MPI_INTEGER, source, tag, MPI_COMM_WORLD, &status);
    		part_sum = 0;

    		dest = 0;
    		for (i = 0; i < s; i++)
    		{
    			part_sum = part_sum + numbers[i];
    		}
    		MPI_Send(&part_sum, 1, MPI_INTEGER, dest, tag, MPI_COMM_WORLD);
    	}
    	else
    	{
    		/* master: split the numbers into chunks of s and send one to each process */
    		s = nb / processes;

    		for (j = 0, n = 0; j < processes; j++, n = n + s)
    		{
    			MPI_Send(&numbers[n], s, MPI_INTEGER, j, tag, MPI_COMM_WORLD);
    		}

    		int total_sum = 0;

    		for (i = 0; i < processes; i++)
    		{
    			MPI_Recv(&part_sum, 100, MPI_INTEGER, i, tag, MPI_COMM_WORLD, &status);
    			total_sum += part_sum;
    		}

    		printf("%i\n", total_sum);
    	}
    	MPI_Finalize();
    	return 0;
    }
    Thanks in advance.

  2. #2
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    It would help if you told us what happens in relation to what you expect to see.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #3
    Registered User
    Join Date
    Feb 2009
    Posts
    6
    Basically, let's say I have 5 "slaves" and the numbers 1-20.
    The master will split the numbers into chunks and send one chunk to each slave.
    E.g. the first slave will get the numbers 1,2,3,4, the second slave the numbers 5,6,7,8, and so on.
    Slave one will calculate 1+2+3+4, slave two will calculate 5+6+7+8, and all the slave processes return their results to the master, which then sums up all the partial sums and prints the total on the standard output (screen).

    The if(my_rank != 0) block contains the code for the slaves, and the else block the code for the master.
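    As an aside, the chunking arithmetic described above can be checked without MPI at all. The following serial C sketch (an illustration of the 5-slave, 1-20 example, not code from the thread) computes each slave's partial sum and the master's total:

    ```c
    #include <stdio.h>

    /* Serial sketch of the chunking described above: 5 "slaves",
     * numbers 1..20, chunk size 20/5 = 4. Each partial sum stands
     * in for what one slave would compute and send back. */
    int main(void)
    {
        int numbers[20];
        int nb = 20, slaves = 5, s = nb / slaves;   /* chunk size: 4 */
        int i, j, total_sum = 0;

        for (i = 0; i < nb; i++)
            numbers[i] = i + 1;                     /* 1..20 */

        for (j = 0; j < slaves; j++) {
            int part_sum = 0;                       /* one slave's work */
            for (i = j * s; i < (j + 1) * s; i++)
                part_sum += numbers[i];
            printf("slave %d: part_sum = %d\n", j + 1, part_sum);
            total_sum += part_sum;
        }

        printf("total = %d\n", total_sum);          /* 1+2+...+20 = 210 */
        return 0;
    }
    ```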

  4. #4
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Yes, I understand the concept of distributed computing. But to be able to help you, we'd need to understand what part of your code is or isn't working. Are you getting the result you expect?

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #5
    Registered User
    Join Date
    Feb 2009
    Posts
    6
    When I run the code I get errors from the recv functions. I am not sure whether I am using them correctly.

    The function headers are:

    Code:
    int MPI_Send(void *buf, int count, MPI_Datatype type, int dest, int tag,
                 MPI_Comm comm)
    int MPI_Recv(void *buf, int count, MPI_Datatype type, int src, int tag,
                 MPI_Comm comm, MPI_Status *status)

  6. #6
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Getting more and more tired of the "twenty questions"...

    And do you think it would help us help you if you told us WHAT those errors are?

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  7. #7
    Registered User
    Join Date
    Feb 2009
    Posts
    6
    Sorry about that.
    This is the error message that I get when I try to run the code:

    *** An error occurred in MPI_Recv
    *** on communicator MPI_COMM_WORLD
    *** MPI_ERR_TRUNCATE: message truncated
    *** MPI_ERRORS_ARE_FATAL (goodbye)

  8. #8
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    So your receive process expects to receive 100, and you are sending s - is s 100?

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  9. #9
    Registered User
    Join Date
    Feb 2009
    Posts
    1
    Quote Originally Posted by matsp View Post
    So your receive process expects to receive 100, and you are sending s - is s 100?
    This is exactly the problem. If you run with fewer than 10 processes, you're sending more than 100 ints to each process, and the message gets truncated at the receiver. MPI defines this as an error. It would likely be better to MPI_Recv the right number of elements, since even the compute processes can perform the same s = ... computation as MPI_COMM_WORLD rank 0 (they all got the same value for "processes").

    Also, shouldn't s = nb/(processes - 1) since you're not sending to MCW rank 0?

    Hope that helps (yay google alerts for finding this question for me ;-) ).
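    To see why the truncation only bites below 10 processes, the chunk-size arithmetic can be checked in plain C (a standalone sketch assuming nb = 1000 and the receive count of 100 from the original code; no MPI needed):

    ```c
    #include <stdio.h>

    /* For nb = 1000 numbers, the original code computes chunk size
     * s = nb / processes, but every MPI_Recv posts a buffer count of
     * only 100 ints. Whenever s > 100 the incoming message is longer
     * than the receive count, which MPI reports as MPI_ERR_TRUNCATE. */
    int main(void)
    {
        int nb = 1000, recv_count = 100;
        int processes;

        for (processes = 2; processes <= 12; processes++) {
            int s = nb / processes;
            printf("processes=%2d  s=%3d  %s\n", processes, s,
                   s > recv_count ? "truncated" : "fits");
        }
        return 0;
    }
    ```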
