Correctly phrased, experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or a more extreme) result if no real effect exists (that is, if the null, no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition.
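That definition can be made concrete with a small simulation. The sketch below (a hypothetical example, not from the article) asks: if we flip a fair coin 100 times and happen to observe 61 heads, how often would chance alone produce a result at least that extreme? The fraction of simulated runs that do is an estimate of the two-sided p value; the coin, the counts, and the number of simulations are all illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

n_flips = 100        # flips per experiment (assumed)
observed_heads = 61  # the result we happened to see (assumed)
n_sims = 100_000     # simulated experiments under the null

# Simulate the null hypothesis: a fair coin (no real effect), counting
# how often chance alone yields a result at least as far from the
# expected 50 heads as the observed 61 (two-sided).
extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if abs(heads - 50) >= abs(observed_heads - 50):
        extreme += 1

p_value = extreme / n_sims
print(f"simulated p value: {p_value:.3f}")
```

For these numbers the simulated p value comes out a little under .05: chance alone rarely, but not never, produces 61 or more heads (or 39 or fewer) from a fair coin. Note what the number does and does not say: it is the probability of data this extreme *given* no effect, not the probability that no effect exists given the data.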
The article takes a valid concern and presents it in a sensationalized, exaggerated fashion. Still, even when scientists understand p values, that does not mean the research and the statistics behind it are reported to the public conscientiously and correctly. I would bet most lay people do not know or understand p values, or the difference between the everyday sense of "significant" and statistical significance. The article remains a valuable explanation for the general public of how p values and statistics can be used constructively in science and deceptively in propaganda.