Code:
    //open files for reading and writing
    FILE *out = fopen("sim_output.txt", "w");
    FILE *in_t = fopen("sim_input.txt", "r");
    if (in == NULL && out == NULL) //Check if files exists before we can use them
    {
        perror("Input file empty\n");
        exit(0);
    }
    //Create a shared memory for the file streams
    //it is important the algorithm keeps a consistent stream of values
    share_t *share[4];
    for (int i = 0; i < 4; i++)//Load files into share for transporting.
    {
        share[i] = mmap(NULL, sizeof * share, PROT_READ | PROT_WRITE,
            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        share[i]->in = in_t;
        share[i]->out = out;
in == NULL isn't testing the file you just opened (you stored that in in_t); it's testing some other variable that happens to be named in.

That share->in and share->out isn't going to work. The FILE* pointer value may sit in shared memory, but everything internal to the stream (the buffer, the position bookkeeping) lives in ordinary process memory, so it will not stay consistent across a fork (and you don't go on to call execl).

fork(2): create child process - Linux man page
The fork() environment is a weird place.
99% of the time, people just use it as a jumping off point for calling execl.
It's not normally a place where you do substantial work.


Back to files:
In main, do
int in_fd = open("sim_input.txt", O_RDONLY);
int out_fd = open("sim_output.txt", O_WRONLY|O_CREAT|O_TRUNC, 0660);

Then share these descriptors.

In each function, you can get back to a FILE* with fdopen, such as
FILE *in_f = fdopen(share->in_fd, "r");