The Coming Software Apocalypse – The Atlantic

Source: The Coming Software Apocalypse – The Atlantic

The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination.
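
A minimal sketch in Python of the failure mode described above. This is emphatically not Intrado's actual code; the class, names, and threshold value are all invented for illustration. The point is that nothing in the code is "broken": it faithfully enforces a limit that turned out to be the wrong thing to enforce.

```python
# Sketch of a "faulty threshold" failure -- NOT Intrado's actual code.
# The names and the threshold value are invented for illustration.

CALL_LIMIT = 40_000_000  # an arbitrary upper bound chosen years earlier

class Router:
    """Assigns a unique ID to each incoming 911 call, up to a fixed limit."""

    def __init__(self) -> None:
        self.calls_routed = 0

    def route(self, call: str) -> str:
        # The code does exactly what it was told to do, perfectly:
        # stop assigning IDs once the counter reaches the threshold.
        # The failure lives in the specification, not the execution.
        if self.calls_routed >= CALL_LIMIT:
            raise RuntimeError(f"threshold {CALL_LIMIT} reached; {call!r} not routed")
        self.calls_routed += 1
        return f"call {self.calls_routed}: {call!r} routed"
```

No test of this code against its specification would have caught the outage; only questioning the specification itself could have.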

“The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.” … This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.” … Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away. … As Gérard Berry said in his talk: “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing. So that’s a big problem.”

software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.
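
A toy sketch of that idea in Python, assuming nothing about any real model-based tool (which in practice compile the model to code rather than interpret it): a single declarative model serves both as the design artifact a human reads and as the thing the machine executes, so the two cannot drift apart.

```python
# Toy illustration of model-based design: one declarative model is both
# the human-readable design and the machine-executed behavior. This is a
# sketch of the concept, not any particular tool's format.

# The model: a traffic-light controller, expressed as plain data.
MODEL = {
    "initial": "red",
    "transitions": {
        ("red", "timer"): "green",
        ("green", "timer"): "yellow",
        ("yellow", "timer"): "red",
    },
}

def run(model, events):
    """Execute the model directly; stay put on unrecognized events."""
    state = model["initial"]
    for event in events:
        state = model["transitions"].get((state, event), state)
        print(f"on {event!r}: now in state {state!r}")
    return state

run(MODEL, ["timer", "timer", "timer"])  # red -> green -> yellow -> red
```

Here direct interpretation stands in for code generation; the essential property is the same, namely that the description of the behavior and the behavior itself are one artifact.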

“In the 15th century, people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.” — Leslie Lamport

Can American soil be brought back to life?

Source: Can American soil be brought back to life?

A new idea: If we revive the tiny creatures that make dirt healthy, we can bring back the great American topsoil. But farming culture — and government — aren’t making it easy.

A clump of soil from a heavily tilled and cropped field was dropped into a wire mesh basket at the top of a glass cylinder filled with water. At the same time, a clump of soil from a pasture that grew a variety of plants and grasses and hadn’t been disturbed for years was dropped into another wire mesh basket in an identical glass cylinder. The tilled soil, similar to the dry, brown soil on Cobb’s farm, dissolved in water like dust. The soil from the pasture stayed together in a clump, keeping its structure and soaking up the water like a sponge. Cobb realized he wasn’t just seeing an agricultural scientist show off a chunk of soil: He was seeing a potential new philosophy of farming.

Promoting soil health comes down to three basic practices: Make sure the soil is covered with plants at all times, diversify what it grows and don’t disrupt it.

Reconstruction of a Train Wreck: How Priming Research Went off the Rails | Replicability-Index

Source: Reconstruction of a Train Wreck: How Priming Research Went off the Rails | Replicability-Index
Response Comment by Daniel Kahneman:

What the blog gets absolutely right is that I placed too much faith in underpowered studies. As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.
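
A quick numerical illustration of what "underpowered" means here, using assumed round numbers rather than figures from any study Kahneman discusses: with 20 subjects per group and a medium-sized true effect, a standard two-sample t-test detects the effect only about a third of the time.

```python
# Illustrative power calculation -- the sample size and effect size are
# assumed round numbers, not taken from any specific priming study.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sided, two-sample t-test: n = 20 per group, d = 0.5.
power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"power with n=20/group, d=0.5: {power:.2f}")  # ~0.34

# Sample size per group needed for the conventional 80% power:
n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n:.0f}")  # ~64
```

With power that low, a published significant result from such a study is about as likely to reflect luck and selective reporting as a real effect, which is exactly the "law of small numbers" trap.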

My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published.

How Scared Should You Be of Macaroni and Cheese? – The Atlantic

Source: How Scared Should You Be of Macaroni and Cheese? – The Atlantic

A reason to minimize highly processed foods, but not to panic

this was an act of fact-based advocacy, as opposed to science, a distinction worth considering

An analysis conducted with the express purpose of justifying a cause invites bias, and that bias is evident in the reporting of the results, which omits any practical analysis of the levels of phthalates in the cheeses. And yet the choice was made to analyze and warn against macaroni and cheese—a product that would resonate with pregnant people and parents with young children. This was a scare-based publicity move undertaken with apparently noble intentions: to raise awareness of what the advocacy group deems a dire cause. It worked. It also caused undue concern and regret.

If I could end this answer with a question to you, it would be, do you think this sort of approach is justifiable? Is this kind of stunt a necessary means to call attention to an issue that has gone largely ignored for decades? Or does it do more harm by undermining the idea of science and the public’s trust in the process, if readers start to assume that studies are simply means of gathering data to justify a pre-existing agenda?

The team that took us to Pluto briefly spotted their next target at the edge of the Solar System – The Verge

Source: The team that took us to Pluto briefly spotted their next target at the edge of the Solar System – The Verge

The object in question is called 2014 MU69, and it’s thought to be an incredibly old space rock that’s remained relatively unchanged since the Solar System first formed 4.6 billion years ago. But tracking 2014 MU69 has been pretty tough. It’s only about 30 miles wide, and it orbits over 4 billion miles from Earth. … Using the Hubble data, along with precise star positions measured by Europe’s Gaia satellite, the team predicted various times when 2014 MU69 might pass directly in front of a star. … However, the first two times the scientists tried to see the occultation, they didn’t see the object’s shadow. The first attempt was on June 3rd, with two separate teams looking in Argentina and South Africa, and the scientists tried again on July 10th with NASA’s SOFIA airplane — a flying observatory — as it flew over the Pacific Ocean. It wasn’t until this weekend, just before midnight Eastern Time on Sunday, that the mission team finally caught the occultation while huddled around telescopes in Chubut and Santa Cruz, Argentina.


This is why science is amazing. It is not always correct. It has to be updated constantly with new information in order to perform even the most trivially different task (e.g., track a new star or a different space rock). But the cumulative knowledge gained thereby lets us do incredible things, like predict when an object only about 30 miles wide and more than 4 billion miles away will pass between a particular place on Earth and a star that is light-years away, accurately enough to put a telescope at that place and watch it happen.
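
A back-of-envelope sketch of why catching this occultation demanded being in exactly the right place. The shadow speed below is an assumed round figure (the relative motion is dominated by Earth's ~30 km/s orbital speed; the article gives no number): because the background star is effectively at infinity, the shadow on Earth is only about as wide as the object itself, and it sweeps past any one telescope in a couple of seconds.

```python
# Rough occultation geometry for 2014 MU69. Only the 30-mile width comes
# from the article; the shadow speed is an assumed round figure.

MILES_TO_KM = 1.609344

object_width_km = 30 * MILES_TO_KM   # ~48 km: star effectively at infinity,
                                     # so shadow width ~ object width
shadow_speed_km_s = 24.0             # assumed sky-plane speed of the shadow

duration_s = object_width_km / shadow_speed_km_s
print(f"shadow width: {object_width_km:.0f} km")
print(f"time the shadow covers one telescope: {duration_s:.1f} s")  # ~2 s
```

A telescope a few dozen kilometers off the predicted track sees nothing at all, which is consistent with the first two attempts coming up empty before the team finally caught the shadow in Argentina.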