Source: Optimization over Explanation – Berkman Klein Center Collection – Medium, by David Weinberger
Govern the optimizations. Patrol the results.
Human-constructed models aim to reduce the variables to a set small enough for our intellects to understand. Machine learning can construct models that work — for example, they accurately predict the probability of medical conditions — but that cannot be reduced enough for humans to understand or explain.
AI systems ought to be required to declare what they are optimized for.
We don’t have to treat AI as a new form of life that somehow escapes human moral questions. We can treat it as what it is: a tool that should be measured by how much better it is at doing something compared to our old way of doing it: Does it save more lives? Does it improve the environment? Does it give us more leisure? Does it create jobs? Does it make us more social, responsible, caring? Does it accomplish these goals while supporting crucial social values such as fairness?
By treating the governance of AI as a question of optimizations, we can focus the necessary argument about them on what truly matters: What is it that we as a society want from a system, and what are we willing to give up to get it?
Source: ‘Red Oceans’: How to Find Profitable Startup Ideas – Capital & Growth
Some entrepreneurs start businesses based upon their hobbies or avocational interests, turning their passion for a particular activity or subject into a business venture. … Often these businesses are not started to reap considerable profits, but instead to pursue a lifestyle that brings joy and personal satisfaction to the entrepreneur.
But if the market for what you like, what interests you or where you live is too small or diffused to support your business idea, then creating that business is a fool’s errand (unless you have money to burn).
The External Approach (or the “Outside-In approach”) to discovering viable business ideas looks first to the external market (versus the skills, knowledge, and tastes of the entrepreneur) and tries to methodically discover market gaps that already exist.
- Observing the Market
- Focus Groups
- Reverse Brainstorming
- Market Growth
- Matrix Charting for Insights
- The “Slice of Life” Approach
- The Market-Area Saturation Approach
- The Competitive Matrix Approach
Source: Conflict Vs. Mistake | Slate Star Codex, by Scott Alexander
Person A and Person B disagree. Why do they disagree?
Do they want the same thing, but one or both people are making a mistake in reasoning due to a lack of information or understanding?
Do they want different, incompatible things in conflict with one another and at most one of them can get what they want?
Can they even agree on what they disagree about (goal or solution/process), or is one or both of them convinced that the other is being deceitful in their arguments and reasoning?
From the comments:
Mistake theories are best suited to the task of avoiding negative-sum conflicts. Conflict theories are best suited to the task of winning zero-sum conflicts.
having power is more important than convincing people that your ideas are correct
http://ncase.me/trust/ (a web game about trust)
Source: Something doesn’t ad up about America’s advertising market – Schumpeter | The Economist
The immense sums being bet on advertising raise a question: how much of it can America take? A back-of-the-envelope calculation by Schumpeter suggests that stock prices currently imply that American advertising revenues will rise from 1% of GDP today, to as much as 1.8% of GDP by 2027 … First, the irritation factor, or how much consumers can absorb without being put off. … The second limit on the size of the advertising market is how much cash all other firms, in aggregate, have at their disposal to spend on ads.
Stockmarket investors are wrong to expect an enormous surge in advertising revenues
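The scale of the implied surge is easy to make concrete. A minimal sketch, using the column’s own figures (1% of GDP today, 1.8% by 2027) and assuming a nine-year horizon from the column’s 2018 publication — the horizon is my assumption, not the article’s:

```python
# Back-of-the-envelope check of the implied advertising surge.
# Figures are the article's; the 9-year horizon (2018 -> 2027) is assumed.

start_share = 0.010   # US ad revenue as a share of GDP "today"
end_share = 0.018     # share implied by stock prices for 2027
years = 9             # assumed horizon

# Annual growth rate the ad/GDP ratio itself would need to sustain,
# over and above GDP growth:
implied_cagr = (end_share / start_share) ** (1 / years) - 1
print(f"ratio must grow {end_share / start_share:.1f}x, "
      f"about {implied_cagr:.1%} per year on top of GDP growth")
```

That is, the advertising share of GDP would have to compound at roughly 6–7% a year faster than the economy itself for a decade — which is the claim the “irritation factor” and the aggregate ad-budget ceiling both push back against.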
Source: ‘Never get high on your own supply’ – why social media bosses don’t use social media
I used to look at the heads of the social networks and get annoyed that they didn’t understand their own sites. Regular users encounter bugs, abuse or bad design decisions that the executives could never understand without using the sites themselves. How, I would wonder, could they build the best service possible if they didn’t use their networks like normal people?
Now, I wonder something else: what do they know that we don’t?
Developers of platforms such as Facebook have admitted that they were designed to be addictive. Should we be following the executives’ example and going cold turkey – and is it even possible for mere mortals?