Read all the values in normal order (top down), but only retain the last value.
It is only worth the added complexity of reading the file in "reverse order" if the file is known to be large (say, hundreds of kilobytes or more) and the interesting entries are likely to be (repeated) near the end of the file. In that case one might opportunistically read just the last couple of kilobytes, to see if the entry is listed there. Reading backwards is complicated, because your reads must overlap by at least a full entry; it's just not worth it in the typical case.
Implementation-wise, you'll need a loop that reads one line at a time.
If you want just one key ("fedora"), you can check if the beginning of the line matches the key. If the key matches, read and store the value, otherwise ignore the line.
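For the single-key case, a minimal sketch in C might look like the following. (The function name last_value and the exact line format "name = value" are my assumptions, not something from your code.)

```c
#include <stdio.h>
#include <string.h>

/* Return 1 and copy the last value found for 'key' into 'value';
   return 0 if the key never appears in the file.
   Assumes lines of the form "name = value". */
int last_value(const char *path, const char *key, char *value, size_t size)
{
    FILE *in = fopen(path, "r");
    char line[1024];
    size_t keylen = strlen(key);
    int found = 0;

    if (!in)
        return 0;

    while (fgets(line, sizeof line, in)) {
        char *p = line;

        /* Does the beginning of the line match the key? */
        if (strncmp(p, key, keylen))
            continue;
        p += keylen;

        /* Skip whitespace, require '=', skip whitespace again,
           so "fedorared" does not match key "fedora". */
        while (*p == ' ' || *p == '\t') p++;
        if (*p++ != '=')
            continue;
        while (*p == ' ' || *p == '\t') p++;

        /* Store the value; a later occurrence overwrites an earlier one. */
        p[strcspn(p, "\r\n")] = '\0';
        strncpy(value, p, size - 1);
        value[size - 1] = '\0';
        found = 1;
    }

    fclose(in);
    return found;
}
```

Note that the loop does not stop at the first match; it keeps overwriting, so whatever is in the buffer when the loop ends is the last value in the file.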
If you want all entries, but only the last value for each, I'd read the name and value, then store the name (and associated value) in a hash table. Instead of adding a new entry, the add-to-hash-table function would only update the value if the name is already in the hash table.
If you consider this awk snippet,
Code:
awk 'BEGIN { FS="[\t\v\f ]*=[\t\v\f ]*" } { data[$1]=$2 } END { for (k in data) printf "%s=%s\n", k, data[k] }' input-file
it actually does exactly what I described for the all-entries case; practically all awk variants implement associative arrays as hash tables. If you run it, you'll also notice that the order in which the entries are listed does not match the input; that, too, is indicative of a hash table. Given your input data, GNU awk 3.1.8 outputs
Code:
bsd=14
fedora=62
mint=199
centos=9
ubuntu=56
If you are not familiar with hash tables, they're not complicated to implement -- at least if you know basic dynamic memory management (malloc(), realloc(), free()); I think this would be a perfect exercise for hash tables.