Saturday, March 28, 2015

Too much of the “sharing economy” in Palo Alto? “You kids, get off my lawn!”


Go quickly, and you’ll see, at the top of his Facebook page, a post by CBS Technology Analyst Larry Magid about a sharing experience in Palo Alto that got out of hand:  https://www.facebook.com/larrymagid?fref=nf

This incident calls to mind a similar one recently reported in the article “Palo Alto opts not to regulate Airbnb rentals--City Council decides issue isn't urgent enough,” by Gennady Sheyner in the Palo Alto Weekly:

“Resident Marvin Weinstein described the ‘Airbnb nightmare’ that occurred on his block when a homeowner who lives in Vietnam decided to put his house on Airbnb and to enlist a management company in San Francisco to oversee the rental. That company, Weinstein said, never came to the house to take a look at the conditions.

“’We suddenly find ourselves in a situation where people are coming for three- or four-day weekends at a time – groups of 10 or more to live in a single house, parking up the entire street, partying to three or four in the morning.’”

Is the combination of role models from the “Hangover” movies and better distributed computing leading to the creation of “instant spring breaks” and “airbnb flash-mobs”?

Where is the reality series that will capitalize on this emerging recreational trend?  What about a Periscope or Meerkat live feed from the scene?  Or several, from the perspective of multiple participants?  What kinds of behavior are we likely to see encouraged by the opportunity to be on a viral world-wide live video feed?



Thursday, March 26, 2015

Fabled attorney Bruce Margolin is “down for the cause” and hoping to forge a consensus on recreational cannabis legalization in California



Bruce Margolin, a criminal defense attorney who spends his days in and out of court trying to keep his clients out of jail on alleged cannabis-related charges, has been working to negotiate a consensus version of the initiative that could legalize cannabis for recreational use in California in 2016.

He called your correspondent’s attention to ReformCA.org’s efforts to create that consensus; the organization’s website provides an infrastructure for an online conversation about all aspects of the initiative-to-be.

He recently met with representatives of Washington, Oregon, and Colorado, states that have successfully legalized recreational cannabis, to learn from their experiences with the legalization process.

He pointed out that while he hopes for a single consensus initiative going forward, he is concerned that a proliferation of competing legalization initiatives may dilute what is emerging as a strong tide of support for cannabis legalization in California, as demonstrated by the results of the latest Public Policy Institute of California poll on the subject, which you can read about here.

For more about this poll, as well as news about California Lieutenant Governor Gavin Newsom’s commission on cannabis legalization, click here.

Margolin pointed out that, because of the low turnout in the last gubernatorial election, it will take fewer signatures to qualify a ballot initiative in the current electoral cycle, possibly making it easy enough for several cannabis legalization measures to reach the ballot.  He referred to the City of Los Angeles’ recent election on medical cannabis dispensaries, which featured more than one option and produced a result that not everyone considers optimal.

It would be ironic if proponents of recreational cannabis legalization proved unable to unite to take advantage of the shift in public opinion toward their point of view, leaving none of the proposed initiatives to qualify for the ballot or win a majority of votes once submitted to the voters.  Bruce Margolin is working hard to avoid that irony.

Wednesday, March 18, 2015

The danger of exempting “autonomous” weapons from a ban by calling them “semi-autonomous”



Mark Gubrud is a physicist with an interest in stopping a new robotic arms race to create powerful weapons that operate outside of human control.  


In a recent e-mail to him, I mentioned that not everyone would be likely to accept his argument that Lockheed Martin’s Long Range Anti-Ship Missile (LRASM) was in fact the kind of “lethal autonomous weapons system” (LAWS) to be discussed, and possibly recommended for regulation or a ban, by the informal Meeting of Experts being convened by the United Nations in Geneva on April 13th through 17th, as part of an updating of the UN Convention on Certain Conventional (i.e., non-nuclear) Weapons.



Gubrud responded:



 “I think the "other players" who "will refuse to acknowledge" the issues I've raised about the lack of any clear line between "semi-autonomous" and "autonomous" weapons, as defined by the Pentagon, are just those who either don't want any effective arms control (people like Paul Scharre and Michael Horowitz, and those currently in control of US policy), or else do want to "stop killer robots" but are afraid to engage the complexities of the issue, finding it more convenient to pretend this is all about things that lie in the future and that everyone can recognize as "killer robots."

“The latter approach will fail, of course, because meanwhile we are developing actually autonomous weapons of increasing sophistication making increasingly complex decisions fully autonomously, and if we keep pretending these are not the "fully autonomous weapons" we're worried about, we'll eventually get to a point where we won't know how to define that distinction and everyone will say "it's too late" to stop an arms race that will already be in full charge.

“In reality, the autonomous arms race is already underway, and it consists almost entirely of things that fit under 3000.09's definition of "semi-autonomous." So, if you want to stop this, that is where you have to direct your attention.”



Left out of this discussion entirely, so far, is the UCLASS (Unmanned Carrier-Launched Airborne Surveillance and Strike) system, whose developmental direction remains uncertain as the Pentagon, Congressional leaders, and defense contractors try, very non-transparently, to reach agreement on what features to include in the RFP for building the aircraft.  That RFP will go out to Boeing, Northrop Grumman, Lockheed Martin, and General Atomics, all of whom would like the multi-billion-dollar contract for the system.



More information about the UCLASS debate can be found on the Facebook page of UCLASS Watch, here.



The UCLASS system is, in effect, an unmanned fighter jet, capable of autonomous operation and no doubt capable, should its developers choose to make it so, of autonomously selecting and engaging hostile targets without effective human intervention.  This is true even if the Pentagon insists on saying that, like the LRASM, UCLASS is only “semi-autonomous” and therefore cannot be regulated or prohibited under the terms of any agreement designed to stop an arms race in “autonomous” weapons.



Lockheed Martin has so far declined to respond to an Etopia News inquiry as to whether or not it considers the LRASM to be “autonomous.”  Nor has any effort yet been made to get the views of the defense contractors who want to build the UCLASS on whether or not that system is “autonomous.”  Keep reading this Etopia News blog for further updates.


Thursday, March 12, 2015

Campaign is underway to stop killer robot arms race that could harm efforts to create “Friendly AI”


As anyone who's seen Terminator can tell you, weaponized artificial intelligence is no friend of the future of humanity.  Terminator, of course, is fiction, but the coming wave of “lethal autonomous weapons systems” (LAWS) is fact, and some close observers of the process of their emergence are sounding alarm bells and calling for their prohibition.

An informal “Meeting of Experts” on the subject of LAWS will take place at the United Nations Office at Geneva between April 13th and April 17th, 2015.  You can access their agenda here.

Heather Roff, a Visiting Professor at the Josef Korbel School of International Studies and a research associate at the Eisenhower Center for Space and Defense Studies at the United States Air Force Academy, will appear there (along with Stuart Russell, a member of the scientific advisory board at the Future of Life Institute) as an invited expert to make the case for banning weaponized artificial intelligence in the form of lethal autonomous weapons systems, or “killer robots.”

In an e-mail to Etopia News, Professor Roff said:

“From my perspective, AWS [autonomous weapons systems] have the potential to act as a catalyst towards developing stronger and stronger AI.  The worry, of course, is that this AI will be for lethal purposes, armed with munitions, and not created for beneficial purposes for humankind.  States may feel the need to engage in an AI arms race if they see any one state dominating the technological developments on AWS, thus hastening the development of an AI that is not created with the correct ends in view.”

Professor Roff is a member of the International Committee for Robot Arms Control, an NGO that is an active supporter and member of the Steering Committee of the Campaign to Stop Killer Robots.  You can find a list of other NGOs involved with the Campaign to Stop Killer Robots here.  You can learn more about the Meeting of Experts in Geneva here. 

Also attending the Meeting of Experts on LAWS in Geneva will be Mark Gubrud, a physicist with an interest in robot arms control whose blog features a discussion of what exactly constitutes an "autonomous" lethal weapons system.

A wide-ranging discussion of “the control problem” for the “superintelligence” that could emerge from current research and development in artificial intelligence (AI), and efforts to solve it, are already taking place at such institutions as the Machine Intelligence Research Institute (MIRI) and in such books as Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, who is, incidentally, a member of the scientific advisory board of the Future of Life Institute.

Efforts to engineer a “controlled detonation” of the “intelligence explosion” expected from the development of AGI (artificial general intelligence), or “hard AI,” are intended to prevent the instantiation or appearance of an ASI (artificial superintelligence) with malign effects on mankind.  An AI arms race would mean developing more and more powerful AIs of a type not necessarily aligned with more general and benevolent human interests.  Clearly, more attention needs to be paid to the issue of human control of both weapons systems and non-military applications of the increasingly powerful AI now available, before a system is created that is too ubiquitous and too powerful to control at all.