Originally Posted by
whiteflags
The *scanf approach is actually not too much different from the tokenizing approach, if you're able to see *scanf as a tokenizer itself. The scanf functions are essentially tokenizing chunks of text delimited by white space. You essentially need space for the biggest word possible, and then space for the array of words. Then call *scanf consecutively. As the function reads a new word, copy it into the array of words.
In C99, whiteflags' idea is very easy and works well - there's just a slight wrinkle needed to remove the punctuation from the end of each word.
For example:
Code:
#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define SIZE 40

int main(void) {
    int i, len;
    char *ch;
    char words[SIZE][15] = {0};
    FILE *fp = fopen("Edelweiss.txt", "r");
    if (!fp) {
        printf("Error opening file!\n");
        return 1;
    }
    i = 0;
    /* %14s limits each word to 14 chars plus '\0', so words[i] can't overflow;
       checking for == 1 (one successful conversion) is safer than != EOF */
    while (fscanf(fp, "%14s", words[i]) == 1) {
        len = strlen(words[i]);                  // handles commas, periods, etc.
        ch = &words[i][len - 1];
        if (!isalpha((unsigned char)*ch)) {
            *ch = '\0';
        }
        printf("%2d: %s\n", i + 1, words[i]);
        ++i;
        if (i > SIZE - 1) break;                 // stop when the array is full
    }
    fclose(fp);
    return 0;
}
//Output:
1: Edelweiss's //just to check on apostrophes being included
2: edelweiss
3: every
4: morning
5: you
6: greet
7: me
8: Small
9: and
10: white
11: clean
12: and
13: bright
14: you
15: look
16: happy
17: to
18: meet
19: me
20: Blossom
21: of's
22: snow
23: may
24: you
25: bloom
26: and
27: grow
28: bloom
29: and
30: grow
31: forever
32: Edelweiss
33: edelweiss
34: bless
35: my
36: homeland
37: forever