Elon Musk voiced his concerns about artificial intelligence at the 2016 Code Conference:
From Google to IBM, Amazon to Facebook, everyone is investing heavily in artificial intelligence. The goal is to develop a new kind of AI that can make independent decisions, equaling or even surpassing the human brain.
On the sidelines of this frantic race, some companies, CEOs, world-renowned scientists, and visionaries are raising some ethical concerns: what will be the consequences for humanity, should AI rival the human brain, surpass it, or make decisions for us? To answer this question and frame an ethical approach to AI R&D, Elon Musk created OpenAI, a non-profit research lab whose mission is to develop a form of AI that is beneficial to humanity.
Founded in December 2015, the organization has already garnered support from legendary philanthropist Bill Gates, physicist Stephen Hawking, scientist Stuart Russell, PayPal cofounder Peter Thiel, and Sam Altman, president of Y Combinator (the world’s most fertile start-up incubator). OpenAI also collaborates with other institutions and independent researchers, notably by open-sourcing its patents and research results.
With a budget of $1 billion, OpenAI has already released several tools for training and benchmarking AI, like OpenAI Gym and, more recently, Universe, a software platform based on reinforcement learning. They enable researchers to design AI agents that interact with environments conceived for humans (video games, the Internet, computers…). The goal is to enable AI to reproduce human behavior through trial and error. Imagine that!
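To give a flavor of what "learning through trial and error" means in practice, here is a minimal sketch of reinforcement learning in plain Python. This is not OpenAI's code, and it is far simpler than anything Gym or Universe hosts: a hypothetical agent repeatedly tries one of three actions with unknown payoffs, and, from reward feedback alone, gradually figures out which action works best (a classic "multi-armed bandit" setup, using an epsilon-greedy strategy).

```python
import random

def epsilon_greedy_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Learn which action pays best purely by trial and error.

    true_rewards: the hidden average payoff of each action (unknown to the agent).
    epsilon: fraction of the time the agent explores a random action.
    """
    rng = random.Random(seed)
    n = len(true_rewards)
    estimates = [0.0] * n  # the agent's running estimate of each action's payoff
    counts = [0] * n       # how many times each action has been tried
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.randrange(n)  # explore: try something at random
        else:
            action = max(range(n), key=lambda i: estimates[i])  # exploit best guess
        # The environment returns a noisy reward; the agent never sees true_rewards.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# The agent discovers that action 1 (hidden payoff 0.8) is the best choice.
est = epsilon_greedy_bandit([0.2, 0.8, 0.5])
print(max(range(3), key=lambda i: est[i]))
```

Gym-style environments generalize this idea: instead of three abstract actions, the agent presses controller buttons or moves a cursor, and the "reward" is a game score or task completion signal.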