The Future of Life Institute (FLI) on July 1st awarded $7 million in grants to 37 research teams to investigate how to optimize the development of artificial intelligence (AI) while avoiding disastrous runaway-AI scenarios that could harm the economy, society, and humanity in general.
The grant program is dedicated to “keeping AI robust and beneficial.”
Tom Dietterich, President of the Association for the Advancement of Artificial Intelligence (AAAI), received one of these grants, which will enable him to study “Robust and Transparent Artificial Intelligence Via Anomaly Detection and Explanation.”
Here’s what he had to say about the
relationship between these grants and the organization he heads:
“One of the
purposes of AAAI is to promote the responsible application of AI technology.
Hence, we welcome the grants announced by the Future of Life Institute.
Many of the grant recipients are members of AAAI, and their engagement in
the Future of Life Institute program reinforces the goals of our association.
We encourage other organizations and funding agencies to join in the effort to
ensure safe AI for future generations.”
Dietterich also wanted to clear up some “misconceptions” he feels are being
propagated in coverage of the dangers of AI, as exemplified in a CNET article
entitled “Elon Musk-backed group gives $7M to explore artificial intelligence risks.”
According to Dietterich:
“The CNET article continues to propagate two ideas that are misleading. The first idea is that current AI systems are not as smart as humans but that someday soon they will be. This reflects the misconception that 'intelligence' is a one-dimensional quantity, like temperature. In fact, there are many
different dimensions of intelligent behavior. AI systems are already more
intelligent than people when measured along some of these dimensions: for example, in their ability to do complex calculations and to organize the entire contents of the web and answer questions about them. AI systems
are much less capable than people along many other dimensions. Over time,
we can expect AI systems to exceed human capabilities in many more dimensions,
but perhaps not in all. Unlike in the movies, an AI system does not
suddenly 'wake up' one day and discover that it is intelligent. AI progresses
by the accumulation of many research innovations in many different directions.
“The second misconception is that the increasing capabilities of AI will be the primary cause of 'loss of control' of these systems. There are certainly scenarios
where this could be true, and some of the research funded by FLI will explore
these issues and possible counter-measures. But anyone who has programmed a
computer knows that software bugs can lead to a 'loss of control'. Fortunately,
today’s computers can generally be disabled by hitting control-C or
rebooting. As hardware designers and software engineers work to make computers more secure against cyberattacks, the risk that those computers become harder to kill when a bug is encountered will also increase. In short, AI is just
one factor that may lead to the loss of control of computer systems.”
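Dietterich's bug-plus-hardening point is easy to make concrete. Here is a minimal Python sketch, purely an illustration of our own and not drawn from any of the funded projects, showing how a program that traps interrupt signals (as security-hardened software may) combines with a trivial loop bug to produce a process that ignores control-C:

```python
import signal
import time

# Hypothetical hardening: trap SIGINT (control-C) so the process
# cannot be casually terminated; this handler just shrugs it off.
signal.signal(signal.SIGINT, lambda signum, frame: print("interrupt ignored"))

countdown = 10
while countdown != 0:       # bug: countdown steps from 1 to -2,
    countdown -= 3          # so the != 0 test never becomes false
    print("countdown =", countdown)
    time.sleep(1)

# With control-C trapped, stopping this runaway process requires an
# external `kill -9 <pid>` or a reboot: the "harder to kill" risk
# Dietterich describes.
```

Nothing intelligent is happening here; the loss of control comes entirely from an ordinary coding error compounded by the very hardening meant to make the system safer.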
The FLI also awarded $1.5 million for a “Strategic Research Center for Artificial Intelligence,” whose Principal Investigator will be Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies,
a book that was, in part, responsible for the recent surge in public warnings
by leading figures such as Elon Musk, Stephen Hawking, and Bill Gates that
humanity needs to carefully monitor the development of AI in order to avoid
possible catastrophe.
Also included among the grants was $180,000 awarded to Wendell Wallach, author of the recently published A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, for a “Conference and Education” project focusing on “Control and Responsible Innovation in the Development of Autonomous Machines,” which should allow the Yale scholar to expand and deepen his work on the subject.
These grants, and the research that emerges from them, could lead to a profound re-evaluation of the path forward in the development of intelligent machines and their impact on humanity. At the very least, the work they finance should
provide extensive food for thought as artificial intelligence plays an
increasingly large role in the economy, society, and culture, and in the
individual lives of human beings.
Elon Musk may have suffered a corporate and personal setback when SpaceX’s Falcon 9 rocket and Dragon spacecraft exploded on June 28th over the Atlantic Ocean, but his funding of this research into making AI safer may, in the end, prove as significant an achievement on behalf of humanity as his efforts to make mankind a multi-planet species, whatever the outcome in that regard.