I disagree. Certainly, I see the advantages of that method; however, there are several points I would like to make here:
Firstly, as I said previously, x = malloc(n * sizeof(int)) improves the READABILITY of the code, which is an undeniable qualitative aspect of the program. I think this is more important than writing code so as to protect yourself from mistakes that you should not be making in the first place (i.e. your long-short example). Simply put, the quality of the code, which is also reflected in its readability, cannot be sacrificed for the sake of reducing the time you spend debugging your program.
Secondly, a good programmer will spend a considerable amount of time designing the data structures and data types used in his program well before he starts typing code. Consequently, changes in data types should not be too frequent while developing your code. Data representation is a design decision first and foremost. If you design your program well, you will know before you type that x should be a long and not a short. I personally am more old school and believe in the principle of thinking before typing code, rather than jumping on the keyboard and figuring things out along the way.
Thirdly, the whole point of malloc is that it allocates memory, yet ironically, writing x = malloc(n * sizeof *x) gives you absolutely NO IDEA how much memory you are allocating. All you know is that you are allocating n slots for whatever type x happens to be, and that is simply insufficient.
Lastly, the K&R book uses this form of malloc() and not the one with the pointer dereference. This is not a reason per se; however, I think it is important to learn from how the designers of the language themselves used it.