What is the Last Question? – The EDGE

Source: What is the Last Question? – The EDGE Question–2018 | Edge.org, by John Brockman, Editor, Edge

For the 50th anniversary of “The World Question Center,” and for the finale to the twenty years of Edge Questions, I turned it over to the Edgies:

“Ask ‘The Last Question,’ your last question, the question for which you will be remembered.”

Many pages of excellent questions from some very bright people.

Solving Minesweeper, by Magnus Hoff

Source: Solving Minesweeper and making it better, by Magnus Hoff (2015)

By implementing a full solver for Minesweeper, we were able to develop a variant of the game that gets rid of the bane of Minesweeper: the risk of randomly losing the game after you have invested time and thought into solving almost all of the board. This version differs from the original only in the situations that would require random guessing, so I would suggest that this version is strictly more fun than the original game.

The solver shows that Minesweeper sometimes degenerates into a game of chance; the variant backs up a step and removes the chance, while still leaving the multi-square logic for the player to solve.
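The brute-force core of such a solver can be sketched as follows. This is a minimal, illustrative version (the function name and board representation are my own, not Hoff's): each revealed number constrains its unrevealed neighbours, and enumerating every mine assignment consistent with all constraints tells you which cells are provably safe or provably mined. If no cell is provable either way, the player would be forced to guess, which is exactly the situation the variant eliminates.

```python
from itertools import product

def solve(cells, constraints):
    """cells: list of unknown cell ids.
    constraints: list of (neighbour_cells, mine_count) pairs,
    one per revealed number.

    Returns (provably_safe, provably_mined). Exhaustive enumeration
    is exponential in len(cells); a practical solver would add
    constraint propagation first, but the logic is the same.
    """
    consistent = []
    for bits in product([0, 1], repeat=len(cells)):
        assignment = dict(zip(cells, bits))
        if all(sum(assignment[c] for c in group) == count
               for group, count in constraints):
            consistent.append(assignment)
    safe = [c for c in cells if all(a[c] == 0 for a in consistent)]
    mined = [c for c in cells if all(a[c] == 1 for a in consistent)]
    return safe, mined

# Three unknown cells; one revealed "1" touches a and b,
# a revealed "2" touches b and c. The only consistent assignment
# is b and c mined, so a is provably safe.
safe, mined = solve(["a", "b", "c"],
                    [(["a", "b"], 1), (["b", "c"], 2)])
```

If both returned lists are empty, every remaining move is a gamble; Hoff's variant intervenes only at that point, leaving normal deductions untouched.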

Optimization over Explanation – Berkman Klein Center Collection – Medium

Source: Optimization over Explanation – Berkman Klein Center Collection – Medium, by David Weinberger

Govern the optimizations. Patrol the results.

Human-constructed models aim at reducing the variables to a set small enough for our intellects to understand. Machine learning can construct models that work — for example, they accurately predict the probability of medical conditions — but that cannot be reduced enough for humans to understand or to explain them.

AI systems ought to be required to declare what they are optimized for.

We don’t have to treat AI as a new form of life that somehow escapes human moral questions. We can treat it as what it is: a tool that should be measured by how much better it is at doing something compared to our old way of doing it: Does it save more lives? Does it improve the environment? Does it give us more leisure? Does it create jobs? Does it make us more social, responsible, caring? Does it accomplish these goals while supporting crucial social values such as fairness?

By treating the governance of AI as a question of optimizations, we can focus the necessary argument about them on what truly matters: What is it that we as a society want from a system, and what are we willing to give up to get it?

‘Red Oceans’: How to Find Profitable Startup Ideas – Capital & Growth, by CSSheppard

Source: ‘Red Oceans’: How to Find Profitable Startup Ideas – Capital & Growth

Some entrepreneurs start businesses based upon their hobbies or avocational interests, turning their passion for a particular activity or subject into a business venture. … Often these businesses are not started to reap considerable profits, but instead to pursue a lifestyle that brings joy and personal satisfaction to the entrepreneur.

But if the market for what you like, what interests you or where you live is too small or diffused to support your business idea, then creating that business is a fool’s errand (unless you have money to burn).

The External Approach (or the “Outside-In approach”) to discovering viable business ideas looks first to the external market (versus the skills, knowledge, and tastes of the entrepreneur) and tries to methodically discover market gaps that already exist.

  • Observing the Market
  • Focus Groups
  • Reverse Brainstorming
  • Market Growth
  • Matrix Charting for Insights
  • The “Slice of Life” Approach
  • The Market-Area Saturation Approach
  • The Competitive Matrix Approach

Conflict Vs. Mistake | Slate Star Codex

Source: Conflict Vs. Mistake | Slate Star Codex, by Scott Alexander

Person A and Person B disagree. Why do they disagree?

Do they want the same thing, but one or both people are making a mistake in reasoning due to a lack of information or understanding?

Do they want different, incompatible things in conflict with one another and at most one of them can get what they want?

Can they even agree on what they disagree about (goal or solution/process), or is one or both of them convinced that the other is being deceitful in their arguments and reasoning?


From the comments:

Mistake theories are best suited to the task of avoiding negative-sum conflicts. Conflict theories are best suited to the task of winning zero-sum conflicts.

having power is more important than convincing people that your ideas are correct

http://ncase.me/trust/ (a web game about trust)