I run the latest Intel Parallel Studio XE Cluster Edition with Intel MPI Library 2017 Update 2 on top of MS Visual Studio Enterprise 2015.
I am trying to debug a program that runs in an MPI environment, and I followed the steps listed at software.intel.com/en-us/node/610381.
I also made sure the project properties follow the suggestions from a previous topic of mine (software.intel.com/en-us/forums/intel-visual-fortran-compiler-for-windows/topic/712760), and I ran mpivars.bat (from C:\Program Files (x86)\IntelSWTools\mpi\2017.2.187\intel64\bin) as instructed there:
- Under Properties (for all Configurations), set:
- Fortran > General > Additional Include Directories: $(I_MPI_ROOT)\intel64\include
- Linker > General > Additional Library Directories: $(I_MPI_ROOT)\intel64\lib\release
- Linker > Input > Additional Dependencies: impi.lib (a command-line equivalent is sketched just below)
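(For what it's worth, my understanding is that these project settings are roughly equivalent to building from a command prompt where mpivars.bat has been run, with something like the line below; the source file name is just a placeholder:)

ifort /I"%I_MPI_ROOT%\intel64\include" my_mpi_code.f90 /link /LIBPATH:"%I_MPI_ROOT%\intel64\lib\release" impi.lib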
However, when I try to build, I get 20 errors similar to "error LNK2001: unresolved external symbol _MPI_WIN_DUP_FN" and "error LNK2019: unresolved external symbol _MPI_COMM_RANK referenced in function _MAIN__", each naturally with a different external symbol. I only use the following MPI calls (these are ALL I use, in different sections of the program; a condensed sketch of how they sit in the code follows the list):
call MPI_INIT( ierr )
call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
call MPI_BCAST(nrep,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(nvar,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(ind_var,17,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(txe,1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(txc,1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(txm,1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(txr,1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(npop_real,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(i_criterio,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(nmaxgen,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
call MPI_GATHER(pop_loc,npop_loc,MPI_DOUBLE_PRECISION,pop,npop_loc,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_GATHER(apt_loc,npop_loc,MPI_DOUBLE_PRECISION,apt,npop_loc,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_GATHER(pen_loc,npop_loc,MPI_DOUBLE_PRECISION,pen,npop_loc,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_BARRIER(MPI_COMM_WORLD, ierr)
call MPI_SCATTER(pop_red,npop_loc_red,MPI_DOUBLE_PRECISION,pop_loc,npop_loc_red,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_SCATTER(apt_red,npop_loc_red,MPI_DOUBLE_PRECISION,apt_loc,npop_loc_red,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_SCATTER(pen_red,npop_loc_red,MPI_DOUBLE_PRECISION,pen_loc,npop_loc_red,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_GATHER(pop_loc,npop_loc_red,MPI_DOUBLE_PRECISION,pop_red,npop_red,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_GATHER(apt_loc,npop_loc_red,MPI_DOUBLE_PRECISION,apt_red,npop_red,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_GATHER(pen_loc,npop_loc_red,MPI_DOUBLE_PRECISION,pen_red,npop_red,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
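For context, here is a condensed, illustrative sketch of how these calls are laid out (the program name is made up, declarations and the computation in between are abbreviated, and I am only indicating where the broadcasts/gathers/scatters listed above go):

program genetic_mpi_sketch                 ! name is illustrative only
   implicit none
   include 'mpif.h'                        ! MPI definitions ('use mpi' would be the module alternative)
   integer :: ierr, myid, numprocs
   integer :: nrep, nvar, npop_real, i_criterio, nmaxgen
   double precision :: ind_var(17), txe, txc, txm, txr

   call MPI_INIT( ierr )
   call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
   call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )

   ! ... rank 0 reads the input, then the MPI_BCAST calls listed above ...
   ! ... per-generation work, with the MPI_GATHER / MPI_SCATTER / MPI_BARRIER calls listed above ...

end program genetic_mpi_sketch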
Would anyone have any idea? Suggestions?
Many thanks,
Alex.