Tuesday, April 30, 2013

Deceased--Space Telescope Herschel

 Space Telescope Herschel
May 14th, 2009 to April 29th, 2013

"Space Telescope Herschel Ran Out of Liquid Helium Today as Planned"

by

Mark Hoffman

April 29th, 2013

SCIENCE WORLD REPORT

The crucial liquid helium coolant of ESA's Herschel space observatory ran out today, marking the end of a mission that served astronomers for more than three exciting years studying the cool Universe.

[Image caption: ESA's Herschel infrared observatory has an unprecedented view of the cold Universe, bridging the gap between what can be observed from the ground and earlier infrared space missions, and bringing to light previously unseen star-forming regions, molecular clouds and galaxies enshrouded in dust. This artist's impression of ESA's Herschel space observatory is set against a background image showing baby stars forming in the Rosette Nebula. The image is a three-colour composite made by Herschel's Photoconductor Array Camera and Spectrometer (PACS) and the Spectral and Photometric Imaging Receiver (SPIRE) at wavelengths of 70 microns (blue), 160 microns (green) and 250 microns (red).]

A pioneering mission, it was the first telescope to cover the entire wavelength range from the far-infrared to submillimetre, making it possible to study previously invisible cool regions of gas and dust in the cosmos, and providing new insights into the origin and evolution of stars and galaxies. Herschel was launched on 14 May 2009 and, with a main mirror 3.5 m across, it is the largest, most powerful infrared telescope ever flown in space.

In order to make such sensitive far-infrared observations, the detectors of the three science instruments – two cameras/imaging spectrometers and a very high-resolution spectrometer – are housed inside a giant thermos flask known as a cryostat so they can be cooled down to –271°C, close to absolute zero. This is achieved by a finite amount of superfluid liquid helium that evaporates over time, gradually emptying the helium tank and thus determining Herschel’s scientific life. At launch, the cryostat was filled to the brim with over 2300 litres of liquid helium, weighing 335 kg, for 3.5 years of operations in space.
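As a rough consistency check on those figures (the helium density is not given in the article; about 0.145 kg per litre is a commonly quoted value for superfluid helium), the quoted volume, mass and lifetime hang together:

\[
2300\ \mathrm{L} \times 0.145\ \mathrm{kg/L} \approx 334\ \mathrm{kg},
\qquad
\frac{335\ \mathrm{kg}}{3.5\ \mathrm{yr} \times 365\ \mathrm{d/yr}} \approx 0.26\ \mathrm{kg/day},
\]

consistent with the 335 kg quoted, and implying an average boil-off of roughly a quarter of a kilogram of helium per day over the 3.5-year design lifetime.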

But in this limited time, Herschel has made extraordinary discoveries across a wide range of topics, from starburst galaxies in the distant Universe to newly forming planetary systems orbiting nearby young stars. The science observing programme was carefully planned to take full advantage of the lifetime of the mission, with all of the highest-priority observations now completed.

Herschel is scheduled to be propelled into its long-term stable parking orbit around the Sun in early May.

The confirmation that the helium is finally exhausted came this afternoon at the beginning of the spacecraft's daily communication session with its ground station in Western Australia, with a clear rise in temperatures measured in all of Herschel's instruments.

"Herschel has exceeded all expectations, providing us with an incredible treasure trove of data that will keep astronomers busy for many years to come," says Prof. Alvaro Gimenez, ESA's Director of Science and Robotic Exploration.

Herschel has made over 35,000 scientific observations, amassing more than 25,000 hours of science data from about 600 observing programmes. A further 2,000 hours of calibration observations also contribute to the rich dataset, which is based at ESA's European Space Astronomy Centre, near Madrid in Spain.

The archive will become the legacy of the mission. It is expected to provide even more discoveries than have been made during the lifetime of the Herschel mission.

"Herschel's ground-breaking scientific haul is in no little part down to the excellent work done by European industry, institutions and academia in developing, building and operating the observatory and its instruments," adds Thomas Passvogel, ESA's Herschel and Planck Project Manager.

The mission resulted in a number of technological advancements applicable to future space missions and potential spin-off technologies. The mission saw the development of advanced cryogenic systems, the construction of the largest telescope mirror ever flown in space, and the utilisation of the most sensitive direct detectors for light in the far-infrared to millimetre range. Manufacturing techniques enabling the Herschel mission have already been applied to the next generation of ESA's space missions, including Gaia.

"Herschel has offered us a new view of the hitherto hidden Universe, pointing us to previously unseen processes of star birth and galaxy formation, and allowing us to trace water through the Universe from molecular clouds to newborn stars and their planet-forming discs and belts of comets," says Goeran Pilbratt, ESA's Herschel Project Scientist.

Herschel's stunning images of intricate networks of dust and gas filaments within our Milky Way Galaxy provide an illustrated history of star formation. These unique far-infrared observations have given astronomers a new insight into how turbulence stirs up gas in the interstellar medium, giving rise to a filamentary, web-like structure within cold molecular clouds.

If conditions are right, gravity then takes over and fragments the filaments into compact cores. Deeply embedded inside these cores are protostars, the seeds of new stars that have gently heated their surrounding dust to just a few degrees above absolute zero, revealing their locations to Herschel's heat-sensitive eyes.

Over the first few million years in the life of newborn stars, the formation of planets can be followed in the dense discs of gas and dust swirling around them. In particular, Herschel has been following the trail of water, a molecule crucial to life as we know it, from star-formation clouds to stars to planet-forming discs.

Herschel has detected thousands of Earth oceans' worth of water vapour in these discs, with even greater quantities of ice locked up on the surface of dust grains and in comets.

Closer to home, Herschel has also studied the composition of the water-ice in Comet Hartley 2, finding it to have almost exactly the same isotopic ratios as the water in our oceans.

These findings fuel the debate about how much of Earth's water was delivered via impacting comets. Combined with the observations of massive comet belts around other stars, astronomers hope to understand whether a similar mechanism could be at play in other planetary systems, too.


Herschel Space Observatory [Wikipedia]

Look what the '70s hath brought..."Legs & Co."


Somehow I missed this female dance troupe...I was lucky I guess. Reminds me of [Benny] "Hill's Angels".



Legs & Co. [Wikipedia]

Here are Benny Hill's "Hill's Angels"...some similarity...



Become a chemist...not that easy now


From a pulp sci-fi magazine from the '40s.

Balloon dinosaur



That's what college students used to do. Now it is balloon dinosaurs. Well, it is interesting.


"WATCH: Artists Create A 20-Foot Dinosaur Out of Balloons"

by

Melissa Locker

April 30th, 2013

Time

Clowns who can twist a balloon into a dog or bear are pretty impressive. But they have nothing on Airigami, a balloon-artist collective led by Larry Moss. His team crafted a 20-foot dinosaur out of balloons in the middle of the Virginia Museum of Natural History. The result is like a family-friendly version of Jurassic Park 3D, with a towering Acrocanthosaurus, a dinosaur from the Early Cretaceous period, looming over spectators.

The enormous balloon sculpture took four days to build, with the help of a team of balloon artists, museum staff and some school children. This time-lapse video allows you to watch the action in under two minutes, showing the intricate planning and careful construction that goes into building a 20-foot balloon dinosaur. On the airigami blog, Moss writes that building the Acrocanthosaurus in the museum aided the elaborate process: “Having a life-size model next to us sure made the construction a lot easier. Our design, based on images we found online, was pretty accurate, but it was nice to be able to look up and take measurements off of the real thing.”

If you are in Virginia and want to see the sculpture, you'd better hurry. The Acrocanthosaurus will only be around as long as the air remains in the balloons. Then it, too, will go the way of the dinosaurs.



Pension payments are becoming novel


Well, it might work but could be embarrassing...Trojan.

"Kodak to Pay Retired Workers in Film (And Other Adventures in Creative Pension Funding)"

by

Dan Kadlec

April 30th, 2013

Time

Eastman Kodak has reached a deal to hand over its film business to retirees in lieu of paying monthly benefits, making it the latest strapped corporation to resort to non-cash pension contributions.

Kodak, restructuring under bankruptcy protection, will turn over its personalized imaging and document imaging businesses to the U.K. Kodak Pension Plan, according to a report in the Wall Street Journal. This will wipe out a $2.8 billion pension obligation. If all goes well, the pension will sell the businesses and use the proceeds to fund retirement benefits. But no one knows what the businesses will fetch.

In the news game, we like to say three makes a trend. So the Kodak deal raises troubling questions about our collective retirement security. The British food company Dairy Crest recently transferred 44 million pounds of cheese to its pension fund to help plug a $128 million deficit. The spirits producer Diageo (Johnnie Walker, Smirnoff) gave more than $760 million in “maturing whiskey” to its retirement fund to help quench a more than $1 billion pension thirst.

What other assets might retirees be asked to accept? Can they be properly valued? How do you turn bricks and mortar, or whatever, into a reliable and long-lasting income stream?

Such creative solutions are born of necessity. Two-thirds of the companies in the S&P 500 have traditional pension plans and only 18 of them are fully funded. Unfunded liabilities total $355 billion. This massive shortfall, by the way, is echoed in public pension plans as well. We have a big mess on our hands.

As a matter of fact, this trend stretches way beyond three examples. U.S. Steel transferred 170,000 acres of Alabama timberland to its workers’ pension fund several years ago. The state of Alabama’s pension system owns 11 golf courses, and a string of hotels and spas.

The New York Times reports that the Pension Benefit Guaranty Corporation, which takes over failed pension plans, has a wide variety of alternative assets, including “water rights in the Mojave Desert, diamonds, oil wells, a hog-slaughtering facility, a restaurant, a hyperbaric chamber, a brewery in Philadelphia, a lien on a terminal at Kennedy International Airport and a stake in a nuclear fuel-reconditioning partnership.”

Pension systems are starting to look like private equity firms. This isn’t how it’s supposed to work.

How we got here doesn’t really matter. Let’s just say there’s a long history of gaming the pension system in the U.S., where financial engineers have found ways to loot once-flush pension plans and policymakers have found ways to paper over the shortfalls. Add in a dozen years of abnormally low investment returns and the demographic time bomb of retiring baby boomers and long-lived elders.

You may argue that pension plans are better off accepting alternative assets than getting nothing at all, and that is probably the case. Yet if an employer can hand over an asset that is difficult to value, and assume a high rate of return on that asset, it permits the employer to contribute even less cash going forward—further eroding the plan’s viability in the event the rosy assumptions fall short.

Camera film? Cheese? Whiskey? Sounds more like a cocktail party than a pension guarantee.

Sunday, April 28, 2013

Deceased--Edward Allan Frieman

Edward Allan Frieman
January 19th, 1926 to April 11th, 2013

"Edward Frieman dies at 87; leading figure in American science"

With wide-ranging interests, Edward Frieman led the Scripps Institution of Oceanography, advised the U.S. on defense and energy and was a friend of Albert Einstein.

by

Tony Perry

April 28th, 2013

Los Angeles Times

Edward A. Frieman, a leading figure in American science for decades as a researcher with wide-ranging interests, a top-level governmental advisor on defense and energy issues, and director of the Scripps Institution of Oceanography at UC San Diego, has died. He was 87.

Frieman died April 11 at UCSD's Thornton Hospital in La Jolla of a respiratory illness, the university announced.

His legacy extends to leadership posts in academia, government and private industry. There are "not many like him, and he will be sorely missed," said John Deutch, professor at the Massachusetts Institute of Technology and former CIA director and deputy secretary of Defense.

By training, Frieman was a plasma physicist, but the depth and breadth of his scientific pursuits extended to hydromagnetics, hydrodynamics, astrophysics, atmospheric sciences and more. He was a pioneer in the field of sustainability and, as director at Scripps from 1986 to 1996, made it a leader in global climate change research.

Ray Weiss, distinguished professor of geochemistry at Scripps, said that Frieman's success at increasing budgets and selecting researchers and research topics "strengthened earth, ocean and environmental sciences across the U.S. and internationally."

In the world of high-level science, known for being highly competitive and full of clashing temperaments, Frieman had a reputation for collegiality and modesty.

"Ed was a wonderful friend, always thoughtful and helpful," said Walter Munk, research professor of geophysics at Scripps and a pioneer in wave forecasting and oceanography studies. "I appreciated his elegant shyness."

He enjoyed engaging in discussions with other scientists, particularly about whether certain topics are "applied" or "pure" science, which can be important in seeking government support and funding.

"He said, 'Well, I'm a physicist – to me, all oceanography is applied science,' " said Naomi Oreskes, professor of history and science studies at UC San Diego. "It was a good reminder of how subjective our categories of analysis can be, especially ones we fight about."

Edward Allan Frieman was born in New York on Jan. 19, 1926. During World War II, he served as a deep-sea diving officer; after the war, he participated in the atomic tests at Bikini Atoll, which he later said made a major impression on him about the need to avoid unleashing the destructive power of such weapons.

In the mid-1980s, Frieman was part of delicate behind-the-scenes negotiations that arranged a joint American-Soviet oceanographic expedition, an effort to help defuse tensions between the nuclear superpowers.

After World War II, Frieman received a bachelor's degree in engineering from Columbia University, followed by a master's and doctorate in physics from Polytechnic Institute of Brooklyn.

For 25 years he was at Princeton University in several roles, including professor of astrophysical science. He was befriended by Albert Einstein and was selected to work on classified projects involving nuclear fusion. He was also involved in the complexities of submarines, military strategy and naval tactics.

He served on numerous federal advisory committees and left Princeton to become director of the office of energy research and assistant secretary of the Department of Energy during the administration of President Jimmy Carter. Later, he advised the administration of Carter's successor, Ronald Reagan.

In 1981, Frieman became executive vice president for Science Applications International Corp., a high-technology company and defense contractor based in La Jolla. In 1986, he left to join Scripps. In 1991, he helped Scripps win a competition for Navy support for an advanced research vessel.

Bob Knox, Scripps associate director emeritus, said that Frieman was adept at power politics as well as complex science. "He worked long and hard in Washington over years to keep the Navy shipbuilding budget item from being cut," Knox said.

After retiring as Scripps director in 1996, Frieman became director emeritus and continued to be consulted by Scripps and the federal government. He also received the Navy's Superior Public Service Award and served on the board of trustees of the American University in Paris and on the U.S.-Israel Binational Science Foundation.


Edward Allan Frieman Biography by Dennis Monday

Drone ethics


"Can a Drone Murder?"

by

David Swanson

April 24th, 2013

Institute for Ethics and Emerging Technologies

Tuesday's Senate Judiciary subcommittee hearing on drones was not your usual droning and yammering. Well, mostly it was, but not entirely. Of course, the White House refused to send any witnesses. Of course, most of the witnesses were your usual professorial fare. But there was also a witness with something to say. Farea Al-Muslimi came from Yemen. His village had been hit by a drone strike just the week before.

He described the effects -- all bad for the people of the village, for the people of Yemen, and for the United States and its mission to eliminate all the bad people in the world without turning any of the good people against it.

The usual droning and yammering that preceded and followed this testimony seemed more offensive than usual.  One witness summarized the general position of pointless witnesses who accept all common wisdom and have no information or insights to contribute:

If the drone strikes are part of war, that's fine, she said.  But if they're not part of war, then they're murder.  But since the memos that "legalize" the drone strikes are secret, we don't know whether they're perfectly fine or murder.

That's the common view of things.  But to say it in front of someone who knows something about the killing from the perspective of the victims seems particularly tasteless.

The basic facts are barely in dispute.  A single individual, President Barack Obama, is choosing to send missiles from drones into particular houses and buildings.  Most of the people being killed are innocent and not targeted.  Some of those targeted are not even identified.  Most of the others are identified as run-of-the-mill resisters to hostile foreign occupations of their or neighboring countries.  A handful are alleged to be imminent (meaning eventual theoretical) threats to the United States.  Many could easily have been arrested and put on trial, but were instead killed along with whoever was too close to them.

If this is not part of a war, apparently, then it's murder.

But if it's part of a war, supposedly, it's fine.

It's funny that murder is the only crime war erases.  Believers in civilized warfare maintain that, even in war, you cannot kidnap or rape or torture or steal or lie under oath or cheat on your taxes.  But if you want to murder, that'll be just fine.

Believers in uncivilized war find this hard to grasp.  If you can murder, which is the worst thing possible, then why in the world -- they ask -- can you not torture a little bit too?

What is the substantive difference between being at war and not being at war, such that in one case an action is honorable and in the other it's murder?  By definition, there is nothing substantive about it.  If a secret memo can legalize drone kills by explaining that they are part of a war, then the difference is not substantive or observable.  We cannot see it here in the heart of the empire, and Al-Muslimi cannot see it in his drone-struck village in Yemen.  The difference is something that can be contained in a secret memo.

This is apparently the case no matter whom a drone strike kills and no matter where it kills them.  The world is the battlefield, and the enemies are Muslims.  Young men in predominantly Muslim countries are posthumously declared enemies once a drone has killed them.  They must be enemies.  After all, they're dead.

I wonder how this sounds to a young Muslim man who's taken to heart the lesson that violence is righteous and that war is everywhere at all times.

Do people who blow up bombs at public sporting events think all together differently from people who blow up peaceful villages in Yemen?

Don't tell me we can't know because their memos are secret too.  Those who engage in murder believe that murder is justified.  The reasons they have (secret or known) are unacceptable.  Murder is not made into something else by declaring it to be part of a war.

War is, rather, made criminal by our recognition of it as mass murder.



"Are US drones ethical?"

Whether drones should be used in the US is the wrong question. Americans should be asking: Is it ethical to use drones anywhere? Is it fair to search for security for ourselves at the expense of perpetual insecurity for others?

by

Jack L. Amoureux

April 1st, 2013

The Christian Science Monitor

Recently, concerns about how the US government manages and deploys its fleet of around 7,000 drones have become especially prominent. Drones have become a hot-button issue for a surprisingly diverse set of political actors, but opposition has coalesced around questions of law and procedure, including the constitutional rights of US citizens (those who might be targeted by drone attacks on foreign soil and those whose privacy rights might be violated by surveillance drones over US soil), and the need for greater transparency and regulation.

Some have even raised concerns about the potential use of armed drones by law enforcement in the US. Many companies are now marketing small, armed drones to law enforcement agencies, and some experts see their eventual implementation as “inevitable” – a source of great concern for many.

There is, however, a worrisome void in this debate about US drone policy – the lack of focus on the ethics of drones, whether used domestically or abroad. This neglect puts the United States out of step with the debates that are happening in the areas of the world most affected by drones. Whether or not drones should be employed in the US is the wrong question. Americans should be asking: “Is it ethical to use drones anywhere?”

In researching media coverage of drones over the past 12 years, I have found striking differences in what is reported in the US press relative to Arab media. US news outlets largely ignore pressing ethical questions about drones as a way to wage war and instead fixate on the technological and strategic innovations of drones, their multiple uses, diplomatic intrigue over downed drones in “unfriendly” countries, and whether drone strikes are legal.

In contrast, Arab media tend to focus on the loss of life among families and communities, the multifaceted costs of drones as weapons, and US disregard for other nations’ sovereignty. In covering the Middle East, Afghanistan, and Pakistan, news sources such as Al Jazeera and Asharq Al-Awsat depict individuals who speak of the psychological terror from the daily presence of drones. They share stories of people constantly wondering which patterns of behavior drone controllers find suspicious.

They also reveal a sense of inferiority and embarrassment when a large, powerful country arrives on (over) their soil to make decisions about who will live and die, how much civilian death is acceptable, and how a “militant” will be defined (loosely, it turns out). Citizens in these countries worry that all of these drones are creating even more extremism and terror at home. And they incredulously ask whether drones are not themselves a form of terror.

The American public is not debating these issues and engaging in dialogue with those most affected by US drone policies. If Americans elicited those voices, we could ask: Are we creating acute conditions of insecurity in other countries when individuals constantly live in fear of death falling from the sky? Is it fair to search for security for ourselves at the expense of perpetual insecurity for others? Are drones really the best alternative for the welfare of everyone, both in the short term and long term?

Domestic and international legal questions about drones reflect deeply held American values, but legal discussions fail to make sense of how these values might be reconciled in the face of specific ethical dilemmas. Nor do they recognize and grapple with the values and anxieties of other communities. And both the Bush and Obama administrations have demonstrated that it is easy to provide legal justification for controversial policies. Legal debates can distract us from urgent ethical questions.

Relationships that feature intense violence and vulnerability deserve deep reflection and deliberation. Indeed, if there are to be “new rules” in a continuing and more expansive war against terror (what the Obama administration calls its Overseas Contingency Operation), America should listen to those who are most impacted by those “new rules.”

Perhaps the prospect of armed drones hovering above Americans is ultimately a productive step for taking these ethical questions seriously if it leads us to imagine how whole populations feel about the continuous possibility that right now, in the company of friends and in their own homes, they could be in the crosshairs of a drone.


[Jack L. Amoureux is a visiting assistant professor of politics and international affairs at Wake Forest University who teaches “The Politics of Technology and Violence.”]

"Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA"

by

Patrick Lin

December 15th, 2011

The Atlantic

Last month, philosopher Patrick Lin delivered this briefing about the ethics of drones at an event hosted by In-Q-Tel, the CIA's venture-capital arm. It's a thorough and unnerving survey of what it might mean for the intelligence service to deploy different kinds of robots.

 Robots are replacing humans on the battlefield--but could they also be used to interrogate and torture suspects? This would avoid a serious ethical conflict between physicians' duty to do no harm, or nonmaleficence, and their questionable role in monitoring vital signs and health of the interrogated. A robot, on the other hand, wouldn't be bound by the Hippocratic oath, though its very existence creates new dilemmas of its own.

The ethics of military robots is quickly marching ahead, judging by news coverage and academic research. Yet there's little discussion about robots in the service of national intelligence and espionage, which are omnipresent activities in the background. This is surprising, because most military robots are used for surveillance and reconnaissance, and their most controversial uses are traced back to the Central Intelligence Agency (CIA) in targeted strikes against suspected terrorists. Just this month, a CIA drone--an RQ-170 Sentinel--crash-landed intact into the hands of the Iranians, exposing the secret US spy program in the volatile region.

The US intelligence community, to be sure, is very much interested in robot ethics. At the least, they don't want to be ambushed by public criticism or worse, since that could derail programs, waste resources, and erode international support. Many in government and policy also have a genuine concern about "doing the right thing" and the impact of war technologies on society. To those ends, In-Q-Tel--the CIA's technology venture-capital arm (the "Q" is a nod to the technology-gadget genius in the James Bond spy movies)--had invited me to give a briefing to the intelligence community on ethical surprises in their line of work, beyond familiar concerns over possible privacy violations and illegal assassinations. This article is based on that briefing, and while I refer mainly to the US intelligence community, this discussion could apply just as well to intelligence programs abroad.

BACKGROUND

Robotics is a game-changer in national security. We now find military robots in just about every environment: land, sea, air, and even outer space. They have a full range of form-factors from tiny robots that look like insects to aerial drones with wingspans greater than a Boeing 737 airliner. Some are fixed onto battleships, while others patrol borders in Israel and South Korea; these have fully-auto modes and can make their own targeting and attack decisions. There's interesting work going on now with micro robots, swarm robots, humanoids, chemical bots, and biological-machine integrations. As you'd expect, military robots have fierce names like: TALON SWORDS, Crusher, BEAR, Big Dog, Predator, Reaper, Harpy, Raven, Global Hawk, Vulture, Switchblade, and so on. But not all are weapons--for instance, BEAR is designed to retrieve wounded soldiers on an active battlefield.

The usual reason why we'd want robots in the service of national security and intelligence is that they can do jobs known as the 3 "D"s: dull jobs, such as extended reconnaissance or patrol beyond limits of human endurance, and standing guard over perimeters; dirty jobs, such as work with hazardous materials and after nuclear or biochemical attacks, and in environments unsuitable for humans, such as underwater and outer space; and dangerous jobs, such as tunneling in terrorist caves, controlling hostile crowds, or clearing improvised explosive devices (IEDs).

 But there's a new, fourth "D" that's worth considering, and that's the ability to act with dispassion. (This is motivated by Prof. Ronald Arkin's work at Georgia Tech, though others remain skeptical, such as Prof. Noel Sharkey at University of Sheffield in the UK.) Robots wouldn't act with malice or hatred or other emotions that may lead to war crimes and other abuses, such as rape. They're unaffected by emotion and adrenaline and hunger. They're immune to sleep deprivation, low morale, fatigue, etc. that would cloud our judgment. They can see through the "fog of war", to reduce unlawful and accidental killings. And they can be objective, unblinking observers to ensure ethical conduct in wartime. So robots can do many of our jobs better than we can, and maybe even act more ethically, at least in the high-stress environment of war.

SCENARIOS

With that background, let's look at some current and future scenarios. These go beyond the obvious intelligence, surveillance, and reconnaissance (ISR), strike, and sentry applications, which is what most robots are used for today. I'll limit these scenarios to a time horizon of about 10-15 years from now.

Military surveillance applications are well known, but there are also important civilian applications, such as robots that patrol playgrounds for pedophiles (for instance, in South Korea) and major sporting events for suspicious activity (such as the 2006 World Cup and the 2008 Beijing Olympics). Current and future biometric capabilities may enable robots to detect faces, drugs, and weapons at a distance and underneath clothing. In the future, robot swarms and "smart dust" (sometimes called nanosensors) may be used in this role.

Robots can be used for alerting purposes, such as a humanoid police robot in China that gives out information, and a Russian police robot that recites laws and issues warnings. So there's potential for educational or communication roles and on-the-spot community reporting, as related to intelligence gathering.

In delivery applications, SWAT police teams already use robots to interact with hostage-takers and in other dangerous situations. So robots could be used to deliver other items or plant surveillance devices in inaccessible places. Likewise, they can be used for extractions too. As mentioned earlier, the BEAR robot can retrieve wounded soldiers from the battlefield, as well as handle hazardous or heavy materials. In the future, an autonomous car or helicopter might be deployed to extract or transport suspects and assets, to limit US personnel inside hostile or foreign borders.

In detention applications, robots could also be used to guard not just buildings but also people. Some advantages here would be the elimination of prison abuses like those we saw at Guantanamo Bay Naval Base in Cuba and Abu Ghraib prison in Iraq. This speaks to the dispassionate way robots can operate. Relatedly--and I'm not advocating any of these scenarios, just speculating on possible uses--robots can solve the dilemma of using physicians in interrogations and torture. These activities conflict with their duty to care and the Hippocratic oath to do no harm. Robots can monitor vital signs of interrogated suspects, as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from malice and prejudices that might take things too far (or much further than they already have).

And robots could act as Trojan horses, or gifts with a hidden surprise. I'll talk more about these scenarios and others as we discuss possible ethical surprises next.


ETHICAL AND POLICY SURPRISES

Limitations


While robots can be seen as replacements for humans, in most situations, humans will still be in the loop, or at least on the loop--either in significant control of the robot, or able to veto a robot's course of action. And robots will likely be interacting with humans. This points to a possible weak link in applications: the human factor.

For instance, unmanned aerial vehicles (UAVs), such as Predator and Global Hawk, may be able to fly the skies for longer than a normal human can endure, but there are still human operators who must stay awake to monitor activities. Some military UAV operators may be overworked and fatigued, which may lead to errors in judgment. Even without fatigue, humans may still make bad decisions, so errors and even mischief are always a possibility and may include friendly-fire deaths and crashes.

Some critics have worried that UAV operators--controlling drones from half a world away--could become detached and less caring about killing, given the distance, and this may lead to more unjustified strikes and collateral damage. But other reports seem to indicate an opposite effect: These controllers have an intimate view of their targets by video streaming, following them for hours and days, and they can also see the aftermath of a strike, which may include strewn body parts of nearby children. So there's a real risk of post-traumatic stress disorder (PTSD) with these operators.

Another source of liability is how we frame our use of robots to the public and international communities. In a recent broadcast interview, one US military officer was responding to a concern that drones are making war easier to wage, given that we can safely strike from longer distances with these drones. He compared our use of drones with the biblical David's use of a sling against Goliath: both are about using missile or long-range weapons and presumably have righteousness on their side. Now, whether or not you're Christian, it's clear that our adversaries might not be. So rhetoric like this might inflame or exacerbate tensions, and this reflects badly on our use of technology.

One more human weak-link is that robots may likely have better situational awareness, if they're outfitted with sensors that can let them see in the dark, through walls, networked with other computers, and so on. This raises the following problem: Could a robot ever refuse a human order, if it knows better? For instance, if a human orders a robot to shoot a target or destroy a safehouse, but it turns out that the robot identifies the target as a child or a safehouse full of noncombatants, could it refuse that order? Does having the technical ability to collect better intelligence before we conduct a strike obligate us to do everything we can to collect that data? That is, would we be liable for not knowing things that we might have known by deploying intelligence-gathering robots? Similarly, given that UAVs can enable more precise strikes, are we obligated to use them to minimize collateral damage?

On the other hand, robots themselves could be the weak link. While they can replace us in physical tasks like heavy lifting or working with dangerous materials, it doesn't seem likely that they can take over psychological jobs such as gaining the confidence of an agent, which involves humor, mirroring, and other social tricks. So human intelligence, or HUMINT, will still be necessary in the foreseeable future.

Relatedly, we already hear criticisms that the use of technology in war or peacekeeping missions isn't helping to win the hearts and minds of local foreign populations. For instance, sending robot patrols into Baghdad to keep the peace would send the wrong message about our willingness to connect with the residents; we will still need human diplomacy for that. In war, this could backfire against us, as our enemies mark us as dishonorable and cowardly for not being willing to engage them man to man. This serves to make them more resolute in fighting us; it fuels their propaganda and recruitment efforts; and this leads to a new crop of determined terrorists.

Also, robots might not be taken seriously by humans interacting with them. We tend to disrespect machines more than humans, abusing them more often, for instance, beating up printers and computers that annoy us. So we could be impatient with robots, as well as distrustful--and this reduces their effectiveness.

Without defenses, robots could be easy targets for capture, yet they may contain critical technologies and classified data that we don't want to fall into the wrong hands. Robotic self-destruct measures could go off at the wrong time and place, injuring people and creating an international crisis. So do we give them defensive capabilities, such as evasive maneuvers or maybe nonlethal weapons like repellent spray or Taser guns or rubber bullets? Well, any of these "nonlethal" measures could turn deadly too. In running away, a robot could mow down a small child or enemy combatant, which would escalate a crisis. And we see news reports all too often about unintended deaths caused by Tasers and other supposedly nonlethal weapons.

International humanitarian law (IHL)

What if we designed robots with lethal defenses or offensive capabilities? We already do that with some robots, like the Predator, Reaper, CIWS, and others. And there, we run into familiar concerns that robots might not comply with international humanitarian law, that is, the laws of war. For instance, critics have noted that we shouldn't allow robots to make their own attack decisions (as some do now), because they don't have the technical ability to distinguish combatants from noncombatants, that is, to satisfy the principle of distinction, which is found in various places such as the Geneva Conventions and the underlying just-war tradition. This principle requires that we never target noncombatants. But a robot already has a hard time distinguishing a terrorist pointing a gun at it from, say, a girl pointing an ice cream cone at it. These days, even humans have a hard time with this principle, since a terrorist might look exactly like an Afghani shepherd with an AK-47 who's just protecting his flock of goats.

Another worry is that the use of lethal robots represents a disproportionate use of force, relative to the military objective. This speaks to the collateral damage, or unintended death of nearby innocent civilians, caused by, say, a Hellfire missile launched by a Reaper UAV. What's an acceptable rate of innocents killed for every bad guy killed: 2:1, 10:1, 50:1? That number hasn't been nailed down and continues to be a source of criticism. It's conceivable that there might be a target of such high value that even a 1,000:1 collateral-damage rate, or greater, would be acceptable to us.

Even if we could solve these problems, there may be another one we'd then have to worry about. Let's say we were able to create a robot that targets only combatants and that leaves no collateral damage--an armed robot with a perfectly accurate targeting system. Well, oddly enough, this may violate a rule by the International Committee of the Red Cross (ICRC), which bans weapons that cause more than 25% field mortality and 5% hospital mortality. ICRC is the only institution named as a controlling authority in IHL, so we comply with their rules. A robot that kills most everything it aims at could have a mortality rate approaching 100%, well over ICRC's 25% threshold. And this may be possible given the superhuman accuracy of machines, again assuming we can eventually solve the distinction problem. Such a robot would be so fearsome, inhumane, and devastating that it threatens an implicit value of a fair fight, even in war. For instance, poison is also banned for being inhumane and too effective. This notion of a fair fight comes from just-war theory, which is the basis for IHL. Further, this kind of robot would force questions about the ethics of creating machines that kill people on their own.

Other conventions in IHL may be relevant to robotics too. As we develop human enhancements for soldiers, whether pharmaceutical or robotic integrations, it's unclear whether we've just created a biological weapon. The Biological Weapons Convention (BWC) doesn't specify that bioweapons need to be microbial or a pathogen. So, in theory and without explicit clarification, a cyborg with super-strength or super-endurance could count as a biological weapon. Of course, the intent of the BWC was to prohibit indiscriminate weapons of mass destruction (again, related to the issue of humane weapons). But the vague language of the BWC could open the door for this criticism.

Speaking of cyborgs, there are many issues related to these enhanced warfighters, for instance: If a soldier could resist pain through robotics or genetic engineering or drugs, are we still prohibited from torturing that person? Would taking a hammer to a robotic limb count as torture? Soldiers don't sign away all their rights at the recruitment door: what kind of consent, if any, is needed to perform biomedical experiments on soldiers, such as cybernetics research? (This echoes past controversies related to mandatory anthrax vaccinations and, even now, required amphetamine use by some military pilots.) Do enhancements justify treating soldiers differently, either in terms of duties, promotion, or length of service? How does it affect unit cohesion if enhanced soldiers, who may take more risks, work alongside normal soldiers? Back more squarely to robotics: How does it affect unit cohesion if humans work alongside robots that might be equipped with cameras to record their every action?

And back more squarely to the intelligence community, the line between war and espionage is getting fuzzier all the time. Historically, espionage isn't considered to be casus belli or a good cause for going to war. War is traditionally defined as armed, physical conflict between political communities. But because so much of our assets are digital or information-based, we can attack--and be attacked--by nonkinetic means now, namely by cyberweapons that take down computer systems or steal information. Indeed, earlier this year, the US declared as part of its cyberpolicy that we may retaliate kinetically to a nonkinetic attack. Or as one US Department of Defense official said, "If you shut down our power grid, maybe we'll put a missile down one of your smokestacks."

As it applies to our focus here: if the line between espionage and war is becoming more blurry, and a robot is used for espionage, under what conditions could that count as an act of war? What if the spy robot, while trying to evade capture, accidentally harmed a foreign national: could that be a flashpoint for armed conflict? (What if the CIA drone in Iran recently had crashed into a school or military base, killing children or soldiers?)

Law & responsibility

Accidents are entirely plausible and have happened elsewhere: In September 2011, an RQ-7 Shadow UAV crashed into a military cargo plane in Afghanistan, forcing an emergency landing. Last summer, test-flight operators of an MQ-8B Fire Scout helicopter UAV lost control of the drone for about half an hour, during which it traveled for over 20 miles towards restricted airspace over Washington DC. A few years ago in South Africa, a robotic cannon went haywire and killed 9 friendly soldiers and wounded 14 more.

Errors and accidents happen all the time with our technologies, so it would be naïve to think that anything as complex as a robot would be immune to these problems. Further, a robot with a certain degree of autonomy may raise questions of who (or what) is responsible for harm caused by the robot, either accidental or intentional: could it be the robot itself, or its operator, or the programmer? Will manufacturers insist on a release of liability, like the EULA or end-user licensing agreements we agree to when we use software--or should we insist that those products should be thoroughly tested and proven safe? (Imagine if buying a car required signing a EULA that covers a car's mechanical or digital malfunctions.)

We're seeing more robotics in society, from Roombas at home to robotics on factory floors. In Japan, about 1 in 25 workers is a robot, given their labor shortage. So it's plausible that robots in the service of national intelligence may interact with society at large, such as autonomous cars or domestic surveillance robots or rescue robots. If so, they need to comply with society's laws too, such as rules of the road or sharing airspace and waterways.

But, to the extent that robots can replace humans, what about complying with something like a legal obligation to assist others in need, such as required by a Good Samaritan Law or basic international laws that require ships to assist other naval vessels in distress? Would an unmanned surface vehicle, or robotic boat, be obligated to stop and save a crew of a sinking ship? This was a highly contested issue in World War 2--the Laconia incident--when submarine commanders refused to save stranded sailors at sea, as required by the governing laws of war at the time. It's not unreasonable to say that this obligation shouldn't apply to a submarine, since surfacing to rescue would give away its position, and stealth is its primary advantage. Could we therefore release unmanned underwater vehicles (UUVs) and unmanned surface vehicles (USVs) from this obligation for similar reasons?

We also need to keep in mind environmental, health, and safety issues. Microbots and disposable robots could be deployed in swarms, but we need to think about the end of that product lifecycle. How do we clean up after them? If we don't, and they're tiny--for instance, nanosensors--then they could be ingested or inhaled by animals or people. (Think about all the natural allergens that affect our health, never mind engineered stuff.) They may contain hazardous materials, like mercury or other chemicals in their battery, that can leak into the environment. Not just on land, but we also need to think about underwater and even space environments, at least with respect to space litter.

For the sake of completeness, I'll also mention privacy concerns, though these are familiar in current discussions. The worry is not just with microbots, which may look like harmless insects and birds, that can peek into your window or crawl into your house, but also with the increasing biometrics capabilities that robots could be outfitted with. The ability to detect faces from a distance, as well as drugs or weapons under clothing or inside a house from the outside, blurs the distinction between surveillance and a search. The difference is that a search requires a judicial warrant. As technology allows intelligence-gathering to be more intrusive, we'll certainly hear more from these critics.

Finally, we need to be aware of the temptation to use technology in ways we otherwise wouldn't, especially activities that are legally questionable--we'll always get called out for that. For instance, this charge has already been made against our use of UAVs to hunt down terrorists. Some call it "targeted killing", while others maintain that it's an "assassination." This is still very much an open question, because "assassination" has not been clearly defined in international law or domestic law, e.g., Executive Order 12333. And the problem is exacerbated in asymmetrical warfare, where enemy combatants don't wear uniforms: Singling them out by name may be permitted when it otherwise wouldn't be; but others argue that it amounts to declaring targets as outlaws without due process, especially if it's not clearly a military action (and the CIA is not formally a military agency).

Beyond this familiar charge, the risk of committing other legally-controversial acts still exists. For instance, we could be tempted to use robots in extraditions, torture, actual assassinations, transport of guns and drugs, and so on, in some of the scenarios described earlier. Even if not illegal, there are some things that seem very unwise to do, such as a recent fake-vaccination operation in Pakistan to get DNA samples that might help to find Osama bin Laden. In this case, perhaps robotic mosquitoes could have been deployed, avoiding the suspicion and backlash that humanitarian workers subsequently suffered.

Deception

Had the fake-vaccination program been done in the context of an actual military conflict, then it could be illegal under the Geneva and Hague Conventions, which prohibit perfidy or treacherous deceit. Posing as a humanitarian or Red Cross worker to gain access behind enemy lines is an example of perfidy: it breaches what little mutual trust we have with our adversaries, and this is counterproductive to arriving at a lasting peace. But even if we are not acting illegally, we can still act in bad faith and need to be mindful of that risk.

The same concern about perfidy could arise with robot insects and animals, for instance. Animals and insects are typically not considered to be combatants or anything of concern to our enemies, like Red Cross workers. Yet we would be trading on that faith to gain deep access to our enemy. By the way, such a program could also get the attention of animal-rights activists, if it involves experimentation on animals.

More broadly, the public could be worried about whether we should be creating machines that intentionally deceive, manipulate, or coerce people. That's just disconcerting to a lot of folks, and the ethics of that would be challenged. One example might be this: Consider that we've been paying off Afghani warlords with Viagra, which is a less-obvious bribe than money. Sex is one of the most basic incentives for human beings, so potentially some informants might want a sex-robot, which exist today. Without getting into the ethics of sex-robots here, let's point out that these robots could also have secret surveillance and strike capabilities--a femme fatale of sorts.

The same deception could work with other robots, not just the pleasure models, as it were. We could think of these as Trojan horses. Imagine that we captured an enemy robot, hacked into it or implanted a surveillance device, and sent it back home: How is this different from masquerading as the enemy in their own uniform, which is another perfidious ruse? Other questionable scenarios include commandeering robotic cars or planes owned by others, and creating robots with back-door chips that allow us to hijack the machine while in someone else's possession.

Broader effects

This point about deception and bad faith is related to a criticism we're already hearing about military robots, which I mentioned earlier: that the US is afraid to send people to fight its battles; we're afraid to meet the enemy face to face, and that makes us cowards and dishonorable. Terrorists would use that resentment to recruit more supporters and terrorists.

But what about on our side: do we need to think about how the use of robotics might impact recruitment in our own intelligence community? If we increasingly rely on robots in national intelligence--like the US Air Force is relying on UAVs--that could hurt or disrupt efforts to bring in good people. After all, a robotic spy doesn't have the same allure as a James Bond.

And if we are relying on robots more in the intelligence community, there's a concern about technology dependency and a resulting loss of human skill. For instance, even inventions we love have this effect: we don't remember as well because of the printing press, which immortalizes our stories on paper; we can't do math as well because of calculators; we can't recognize spelling errors as well because of word-processing programs with spell-check; and we don't remember phone numbers because they're stored in our mobile phones. In medical robots, some are worried that human surgeons will lose their skill in performing difficult procedures, if we outsource the job to machines. What happens when we don't have access to those robots, either in a remote location or power outage? So it's conceivable that robots in the service of our intelligence community, whatever those scenarios may be, could also have similar effects.

Even if the scenarios we've been considering end up being unworkable, the mere plausibility of their existence may put our enemies on guard and drive their conversations deeper underground. It's not crazy for people living in caves and huts to think that we're so technologically advanced that we already have robotic spy-bugs deployed in the field. (Maybe we do, but I'm not privy to that information.) Anyway, this all could drive an intelligence arms race--an evolution of hunter and prey, as spy satellites had done to force our adversaries to build underground bunkers, even for nuclear testing. And what about us? How do we process and analyze all the extra information we're collecting from our drones and digital networks? If we can't handle the data flood, and something there could have prevented a disaster, then the intelligence community may be blamed, rightly or wrongly.

Related to this is the all-too-real worry about proliferation, that our adversaries will develop or acquire the same technologies and use them against us. This has borne out already with every military technology we have, from tanks to nuclear bombs to stealth technologies. Already, over 50 nations have or are developing military robots like we have, including China, Iran, Libyan rebels, and others.

CONCLUSION

The issues above--from inherent limitations, to specific laws or ethical principles, to big-picture effects--give us much to consider, as we must. These are critical not only for self-interest, such as avoiding international controversies, but also as a matter of sound and just policy. For either reason, it's encouraging that the intelligence and defense communities are engaging ethical issues in robotics and other emerging technologies. Integrating ethics may be more cautious and less agile than a "do first, think later" (or worse, "do first, apologize later") approach, but it helps us win the moral high ground--perhaps the most strategic of battlefields.

Fire Ice or Methane Hydrate...new energy source?


"Demystifying Fire Ice: Methane Hydrates, Explained"

In Japan, energy companies are targeting pockets of methane hydrate, colloquially called "fire ice," deep under the sea.

by

Jon M. Chang

March 19th, 2013

Popular Mechanics

The Japan Oil, Gas and Metals National Corporation (JOGMEC) announced last week that it had successfully extracted fuel from a deep-sea bed of methane hydrate located off the coast of Shikoku Island. This particular deposit of methane hydrate (also known as fire ice) contains an estimated 40 trillion cubic feet of natural gas, equal to 11 years' worth of gas consumption in Japan. As the world's No. 1 importer of natural gas and a country still recovering from the Fukushima nuclear disaster, Japan could see this natural gas become a major part of its energy consumption over the next decade.
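For scale, the article's own figures imply an annual Japanese consumption of natural gas of roughly

\[
\frac{40\ \text{trillion ft}^3}{11\ \text{yr}} \approx 3.6\ \text{trillion ft}^3\ \text{per year},
\]

so this single deposit, if it proves recoverable, is large but not limitless on a national scale.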

But what exactly is fire ice?

Methane hydrate's exotic nickname is a reference to the compound's chemical structure: molecules of methane gas trapped within a cage of solid water molecules. The cage does not form in everyday conditions. To make the hydrate, both the methane and water need to be in an environment with the right temperature and pressure.

Timothy Collett, a research geologist at the United States Geological Survey (USGS), says that these conditions exist naturally either buried under Arctic soil or, as with the Shikoku deposit, buried in a marine basin. When taken outside of these conditions, methane hydrate doesn't last for long. "It would dissociate within minutes, maybe an hour," he says.
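For reference, the figures that follow are standard textbook values rather than something stated in the article: the most common form, structure I methane hydrate, has an ideal composition of about one methane molecule for every 5.75 water molecules, and when it leaves its stability field it dissociates as

\[
\mathrm{CH_4 \cdot 5.75\,H_2O\,(s)} \;\longrightarrow\; \mathrm{CH_4\,(g)} + 5.75\,\mathrm{H_2O\,(l)},
\]

with one volume of solid hydrate releasing on the order of 160 volumes of methane gas at atmospheric pressure, which is what makes the deposits attractive as a fuel source.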

Methane hydrate hasn't always been seen as a fuel source. In the 1940s, it was a nuisance: engineers discovered the material clogging up natural gas pipelines. They realized that gas was mixing with water and forming large chunks of methane hydrate, which caused blockages. Even today, gas pipeline companies spend a significant portion of their operational budgets—as high as 10 percent, according to Collett—to prevent these blockages. It wasn't until the 1960s that scientists discovered that methane hydrate exists in nature, and not until the 1980s that they began to see it as a potential source of fuel.

Scouting for usable methane hydrate deposits is still a work in progress. For now, the process mimics similar work in finding oil or gas. Collecting seismic data has revealed some methane hydrate deposits. "They're solid, so they have a high acoustic velocity," Collett says, "but the signal appears different than one propagated through regular soil." A current estimate suggests that there is approximately 100,000 trillion cubic feet of methane gas locked in hydrates, but that only about 10 percent of that is commercially viable to extract—the rest is scattered in small pockets.
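Taken at face value (a rough illustration built only on the estimates quoted above, not a USGS calculation), the commercially viable slice would still dwarf the Shikoku find:

    # Rough scale comparison using the estimates quoted above.
    total_hydrate_methane_tcf = 100_000.0   # global methane locked in hydrates, trillion cubic feet
    viable_fraction = 0.10                  # share thought to be commercially viable to extract
    shikoku_deposit_tcf = 40.0              # the deposit discussed earlier in the article

    viable_tcf = total_hydrate_methane_tcf * viable_fraction
    print(f"Commercially viable: {viable_tcf:,.0f} trillion cubic feet")
    print(f"Equivalent Shikoku-sized deposits: {viable_tcf / shikoku_deposit_tcf:,.0f}")
    # prints 10,000 trillion cubic feet, or about 250 deposits the size of the Shikoku find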

Another stumbling block in making methane hydrate usable is figuring out how to separate the gas from the solid. Because the methane hydrate solid is stable only within a set range of temperatures and pressures, altering those conditions liberates the gas from its water cage, letting people extract it. Companies are experimenting with a depressurization method, which works by drilling a wellbore into the deposit itself and pumping out the excess fluid. With less surrounding fluid, the pressure drops, prompting the methane hydrate to dissociate. The depressurization method worked at the smaller Mallik Gas Hydrate Research Well in northern Canada, as well as at the larger one beneath Japanese waters. Heating the deposits could also release the gas but is too energy-intensive to be worthwhile.

But what if the earth released the gas as a result of heating up? Not only energy companies but also scientists studying climate change have a major interest in methane hydrates. Methane is a greenhouse gas, a far more powerful one than carbon dioxide, and some scientists fear the warming of the earth could destabilize hydrates to the point that they release methane into the atmosphere, further worsening global warming. Ideas such as the clathrate gun hypothesis suggest that methane hydrate dissociation is linked to prehistoric global warming.

However, according to a Nature Education paper published by the USGS, only about 5 percent of the world's methane hydrate deposits would spontaneously release the gas, even if global temperatures continue rising over the next millennium. In addition, bacteria in the nearby soil can consume and oxidize the methane so that only a minute fraction (as low as 10 percent of the dissociated methane) ever reaches the atmosphere.
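Reading those two figures together in the most literal, pessimistic way (a toy calculation on the article's percentages, not a climate projection), only a tiny slice of the global hydrate inventory would ever reach the air:

    # Toy worst-case combination of the two fractions quoted above (illustration only).
    fraction_destabilized = 0.05    # deposits that might spontaneously release gas over a millennium
    fraction_reaching_air = 0.10    # dissociated methane not consumed by soil bacteria ("as low as 10 percent")

    overall_fraction = fraction_destabilized * fraction_reaching_air
    print(f"Share of hydrate methane reaching the atmosphere: {overall_fraction:.1%}")
    # prints 0.5%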

There's another potential method for extracting methane that might actually help to combat climate change. Research at the Ignik-Sikumi well, located on Alaska's North Slope, has shown that carbon dioxide can replace methane within the ice cage. Once the carbon dioxide is locked in, the water cage binds even tighter, leaving no room for the methane to reenter. Collett says this way of extracting methane gas for fuel could one day double as a way to sequester carbon dioxide.

It will still be years before methane hydrates become a commercial source of fuel, and not only because the technology is still young. Even with the infrastructure in place, a methane hydrate well can take years before it regularly produces fuel. Depressurizing methane hydrate doesn't happen all at once but slowly propagates through the entire deposit.

Collett sees gas hydrates as a science project at this stage, saying that it's still in the early stages of research and development. JOGMEC itself acknowledges that it still has many questions to answer. In a translated statement, officials say that the current project off Shikoku will also acquire data about the well's impact on the marine environment. "It's one of those things that has a great potential," Collett says, "but it's still only a potential."



Methane clathrate [Wikipedia]

"Tinkerbella nana" is just 250 micrometers in length




"The real Tinker Bell: Scientists discover new species of minute fairyfly that is just one quarter of a millimetre long"

by

Becky Evans

April 25th, 2013

Daily Mail

It may not be as common as the Peter Pan fairy, but this newly discovered insect is the real Tinker Bell.

The fairyfly species named Tinkerbella nana was discovered in the forests of Costa Rica.

The minuscule specimens were all just 250 micrometres long - or one quarter of a millimetre - and were collected by sweeping the forest floor.

Tinkerbella nana was collected at the La Selva Biological Station and named as the newest fairyfly species.

When viewed under a microscope, Tinkerbella nana's long thin wings become visible.

Each is fringed with long hair-like bristles.

Mymaridae, commonly known as fairyflies, are one of about 18 families of chalcid wasps.

They are found across the world, except in Antarctica, but are so minute that they are seldom noticed by humans.

Their apparent invisibility and their delicate wings, with long fringes resembling those of mythical fairies, have earned them their common name.

Fairyfly species include the world's smallest known winged insect, Kikiki huna, which has a body length of only 155 micrometres.

The smallest known adult insect is also a fairyfly. The wingless male of Dicopomorpha echmepterygis is only 130 micrometres long.

Fairyflies live off the eggs and larvae of other insects.

The eggs are commonly laid in concealed locations, such as in plant tissues or in leaf litter or soil.

John Huber, from Natural Resources Canada, was the lead author of A New Genus and Species of Fairyfly, Tinkerbella nana, published in the Journal of Hymenoptera Research.

He said: 'If something is physically possible in living things, some individuals of at least one species, extinct or extant, will likely have achieved it.

'So the lower size limit, by whatever measure of size is chosen, was almost certainly already evolved—somewhere, sometime.

'If we have not already found them, we must surely be close to discovering the smallest insects and other arthropods.'


The Tinkerbella specimens were collected at the biological station, which is owned and managed by the Organization for Tropical Studies.

Little is known about the lifecycle of fairyflies.

They have small wing surfaces, and their relatively long setae - or bristles - are believed to have an aerodynamic function.

It may be to reduce turbulence on wings flapping at several hundred beats per second.



And two related items on Peter Pan and Tinker Bell...

The cocoon existence of Peter Pan

Tinkerbell, Wendy, Sappho

Human empathy for robots, cyborgs, humanoids?


"Humans feel empathy for robots"

April 28th, 2013

SpaceDaily

From the T-101 to Data from Star Trek, humans have been presented with the fictional dilemma of how we empathize with robots. Robots now infiltrate our lives: toys like Furbies and robot vacuum cleaners bring them closer to us, but how do we really feel about these non-sentient objects on a human level? A recent study by researchers at the University of Duisburg-Essen in Germany found that humans show similar brain activity when shown images of affection and violence being inflicted on robots and humans.

Astrid Rosenthal-von der Putten, Nicole Kramer, and Matthias Brand of the University of Duisburg-Essen will present their findings at the 63rd Annual International Communication Association conference in London. Rosenthal-von der Putten, Kramer and Brand conducted two studies.

In the first study, 40 participants watched videos of a small dinosaur-shaped robot being treated in either an affectionate or a violent way, while the researchers measured their physiological arousal and asked about their emotional state directly after the videos. Participants reported feeling more negative while watching the robot being abused and showed higher arousal during the violent video.

The second study, conducted in collaboration with the Erwin L. Hahn Institute for Magnetic Resonance Imaging in Essen, used functional magnetic resonance imaging (fMRI) to investigate potential neural correlates of human-robot interaction in contrast to human-human interaction.

The 14 participants were presented with videos showing a human, a robot and an inanimate object, each being treated in either an affectionate or a violent way.

Affectionate interaction towards both the robot and the human resulted in similar neural activation patterns in classic limbic structures, indicating that they elicit similar emotional reactions. However, when only the videos showing abusive behavior were compared, differences in neural activity suggested that participants felt more empathetic concern for the human in the abuse condition.

A great deal of research in the field of human-robot interaction concentrates on the implementation of emotion models in robotic systems. These studies test implementations with regard to their believability and naturalness, their positive influence on participants, or enjoyment of the interaction.

But little is known about how people perceive "robotic" emotion and whether they react emotionally towards robots. People often have problems verbalizing their emotional state or find it strange to report on their emotions in human-robot interactions. Rosenthal-von der Putten and Kramer's study utilized more objective measures linked to emotion, such as physiological arousal and brain activity associated with emotional processing.

"One goal of current robotics research is to develop robotic companions that establish a long-term relationship with a human user, because robot companions can be useful and beneficial tools. They could assist elderly people in daily tasks and enable them to live longer autonomously in their homes, help disabled people in their environments, or keep patients engaged during the rehabilitation process,"
said Rosenthal-von der Putten.

"A common problem is that a new technology is exciting at the beginning, but this effect wears off especially when it comes to tasks like boring and repetitive exercise in rehabilitation. The development and implementation of uniquely humanlike abilities in robots like theory of mind, emotion and empathy is considered to have the potential to solve this dilemma."

"Investigation on Empathy Towards Humans and Robots Using Psychophysiological Measures and fMRI," by Astrid Rosenthal-von der Putten and Nicole Kramer; To be presented at the 63rd Annual International Communication Association Conference, London, England 17-21 June.


Would you feel bad if her arm fell off?




That Ph.D.


I burned out after one year of the Ph.D. program despite my desire to continue and do research and teach. But then, that was a long time ago and things have changed.

"The Impossible Decision"

by

Joshua Rothman

April 23rd, 2013

The New Yorker

Graduate students are always thinking about the pleasures and travails of grad school, and springtime is a period of especially intense reflection. It’s in the spring, often in March and April, that undergraduates receive their acceptance letters. When that happens, they turn to their teachers, many of them graduate students, for advice. They ask the dreaded, complicated, inevitable question: To go, or not to go?

Answering that question is not easy. For graduate students, being consulted about grad school is a little like starring in one of those “Up” documentaries (“28 Up,” ideally; “35 Up,” in some cases). Your students do the work of Michael Apted, the series’s laconic director, asking all sorts of tough, personal questions. They push you to think about the success and failure of your life projects; to decide whether or not you are happy; to guess what the future holds; to consider your life on a decades-long scale. This particular spring, the whole conversation has been enriched by writers from around the Web, who have weighed in on the pros and cons of graduate school, especially in the humanities. In addition to the usual terrifying articles in the advice section of the Chronicle of Higher Education, a pair of pieces in Slate—  “Thesis Hatement,” by Rebecca Schuman, and  “Thesis Defense” by Katie Roiphe—have sparked many thoughtful responses from bloggers and journalists. It’s as though a virtual symposium has been convened.

I’m a former humanities graduate student myself—I went to grad school in English from 2003 through 2011 before becoming a journalist, and am still working nights on my dissertation—and I’m impressed by the clarity of the opinions these essays express. (Rebecca Schuman: “Don’t do it. Just don’t”; Katie Roiphe: “It gives you a habit of intellectual isolation that is… useful, bracing, that gives you strength and originality.”) I can’t muster up that clarity myself, though. I’m very glad that I went to graduate school—my life would be different, and definitely worse, without it. But when I’m asked to give students advice about what they should do, I’m stumped. Over time, I’ve come to feel that giving good advice about graduate school is impossible. It’s like giving people advice about whether they should have children, or move to New York, or join the Army, or go to seminary.

Maybe I’ve been in school too long; doctoral study has a way of turning your head into a never-ending seminar, and I’m now capable of having complicated, inconclusive thoughts about nearly any subject. But advice helps people when they are making rational decisions, and the decision to go to grad school in English is essentially irrational. In fact, it’s representative of a whole class of decisions that bring you face to face with the basic unknowability and uncertainty of life.

To begin with, the grad-school decision is hard in all sorts of perfectly ordinary ways. One of them is sample bias. If you’re an undergrad, then most of the grad students you know are hopeful about their careers, and all of the professors you know are successful; it’s a biased sample. Read  the harrowing collection of letters from current and former grad students published in the Chronicle, and you encounter the same problem: the letters are written by the kinds of people who read the Chronicle, in response to an article about the horrors of grad school. They, too, are writing out of their personal experiences. It’s pretty much impossible to get an impartial opinion.

Last week, one of my college friends, who now manages vast sums at a hedge fund, visited me. He’s the most rational person I know, so I asked him how he would go about deciding whether to go to grad school in a discipline like English or comparative literature. He dealt immediately with the sample bias problem by turning toward statistics. His first step, he said, would be to ignore the stories of individual grad students, both good and bad. Their experiences are too variable and path-dependent, and their stories are too likely to assume an unwarranted weight in our minds. Instead, he said, he would focus on the “base rates”: that is, on the numbers that give you a broad statistical picture of outcomes from graduate school in the humanities. What percentage of graduate students end up with tenure? (About one in four.) How much more unhappy are graduate students than other people? (About fifty-four per cent of graduate students report feeling so depressed they have “a hard time functioning,” as opposed to ten per cent of the general population.) To make a rational decision, he told me, you have to see the big picture, because your experience is likely to be typical, rather than exceptional. “If you take a broader view of the profession,” he told me, “it seems like a terrible idea to go to graduate school.”

Perhaps that’s the rational conclusion, but, if so, it’s beset on all sides by confounding little puzzles; they act like streams that divert and weaken the river of rational thought. Graduate school, for example, is a one-time-only offer. Very few people start doctoral programs later in life. If you pass it up, you pass it up forever. Given that, isn’t walking away actually the rash decision? (This kind of thinking is a subspecies of the habit of mind psychologists call loss aversion: once you have something, it’s very hard to give it up; if you get into grad school, it’s very hard not to go.) And then there’s the fact that graduate school, no matter how bad an idea it might be in the long term, is almost always fulfilling and worthwhile in the short term. As our conversation continued, my friend was struck by this. “How many people get paid to read what they want to read,” he asked, “and study what they want to study?” He paused. “If I got into a really good program, I would probably go.”

Thinking about grad school this way is confusing, but it’s confusing in a mundane, dependable way; you’re still thinking about pros and cons, about arguments for and against a course of action. Continue to think about grad school, though, and you’ll enter the realm of the simply unknowable. The conflicting reports you’ll hear from different graduate students speak to the difficulty, perhaps even the impossibility, of judging lengthy experiences. What does it mean to say that a decade of your life is good or bad? That it was worthwhile, or a waste of time? Barring some Proustian effort of recollection, a long period of years, with its vast range of experiences and incidents, simply can’t be judged all at once. The best we can do is use what psychologists call “heuristics”: mental shortcuts that help us draw conclusions quickly.

One of the more well-understood heuristics is called the “peak-end rule.” We tend to judge long experiences (vacations, say) by averaging, more or less, the most intense moment and the end. So a grad student’s account of grad school might not be truly representative of what went on; it might merely combine the best (or worst) with how it all turned out. The most wonderful students will be averaged with the grind of the dissertation; that glorious summer spent reading Kant will be balanced against the horrors of the job market. Essentially, peak-end is an algorithm; it grades graduate school in the same way a software program grades an essay. Sure, a judgment is produced, but it’s only meaningful in a vague, approximate way. At the same time, it raises an important conceptual question: What makes an experience worthwhile? Is it the quality of the experience as it’s happening, or as it’s remembered? Could the stress and anxiety of grad school fade, leaving only the learning behind? (One hopes that the opposite won’t happen.) Perhaps one might say of graduate school what Aeneas said of his struggles: “A joy it will be one day, perhaps, to remember even this.” Today’s unhappiness might be forgotten later, or judged enriching in other ways.
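To make that shortcut concrete (a toy sketch of the heuristic as described here, with made-up ratings, not a claim about how anyone actually scores their degree), peak-end judgment reduces a long stretch of experience to just two numbers:

    # Toy illustration of the peak-end rule: judge a long experience by averaging
    # its most intense moment with how it ended.
    def peak_end_score(moments):
        """moments: chronological ratings of an experience, e.g. -10 (awful) to +10 (glorious)."""
        peak = max(moments, key=abs)   # the most intense moment, good or bad
        end = moments[-1]              # how it all turned out
        return (peak + end) / 2

    # Hypothetical trajectory: a glorious summer of Kant (+9), the grind of the
    # dissertation (-4), the horrors of the job market at the end (-6).
    print(peak_end_score([3, 9, 1, -4, -6]))   # prints 1.5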

This kind of thinking, in turn, makes you wonder about the larger purpose of graduate school in the humanities—about the role it assumes in one’s life. To some degree, going to graduate school is a career decision. But it’s also a life decision. It may be, therefore, that even older graduate students are too young to offer their opinions on graduate school. Ten years is a long time, but it’s still only part of a whole. The value of grad school hinges, to a large extent, on what comes next. The fact that what comes next is, increasingly, unclear—that many graduate students don’t go into academia, but pursue other jobs—might only mean that a greater proportion of the value of graduate school must be revealed with time. Grad school might be best understood as what George Eliot, at the end of “Middlemarch,” calls a “fragment of a life,” and


    the fragment of a life, however typical, is not the sample of an even web: promises may not be kept, and an ardent outset may be followed by declension; latent powers may find their long-waited opportunity; a past error may urge a grand retrieval.

You never know how things will turn out. Experiences accrued in one currency can be changed into another. Ambition today can fund tranquility tomorrow; fear today can be a comfort later on. Or the reverse.

The breadth of grad school, in other words—the sheer number of years it encompasses—makes it hard to think about. But, finally, it’s challenging because of its depth, too. Grad school is a life-changing commitment: less like taking a new job and more like moving, for the entirety of your twenties, to a new country. (That’s true, I think, even for undergraduates: grad school is different from college.) Grad school will shape your schedule, your interests, your reading, your values, your friends. Ultimately, it will shape your identity. That makes it difficult to know, in advance, whether you’ll thrive, and difficult to say, afterward, what you would have been like without it.

The philosopher L. A. Paul, who teaches at the University of North Carolina at Chapel Hill, describes these sorts of big life decisions eloquently in a forthcoming paper; she calls them “epistemically transformative” decisions. Sometimes, you can’t know what something is like until you try it. You can’t know what Vegemite tastes like, for example, until you try Vegemite; you can’t know what having children will be like until you have children. You can guess what these things will be like; you can ask people; you can draw up lists of pros and cons; but, at the end of the day, “without having the experience itself” you “cannot even have an approximate idea as to what it is like to have that experience.” That’s because you won’t just be having the experience; the experience will be changing you. On the other side, you will be a different kind of person. Making such a decision, you will always be uninformed.

We don’t, Paul writes, really have a good way to talk about these kinds of life-changing decisions, but we still make them. It’s hard to say how, exactly, we do it. All she can say is that, in order to make them, we have to do something a little crazy; we have to cast aside “the modern upper middle class conception of self-realization [that] involves the notion that one achieves a kind of maximal self-fulfillment through making rational choices about the sort of person one wants to be.” From this point of view, when you contemplate grad school, you’re like Marlow, in “Heart of Darkness,” when he is travelling up-river to find Kurtz. “Watching a coast as it slips by the ship,” Conrad writes,


    is like thinking about an enigma. There it is before you—smiling, frowning, inviting, grand, mean, insipid, or savage, and always mute with an air of whispering, ‘Come and find out.’

We make these decisions, I suspect, not because we’re rational, but because we’re curious. We want to know. This seems especially true about graduate school. It’s designed, after all, for curious people—for people who like knowing things. They are exactly the people most likely to be drawn in by that whispered “Come and find out.”

* * *

In a narrow sense, of course, there’s nothing about these skeptical thoughts that should stop me from giving advice about graduate school. And when students ask me, I do have things to say. I point them to data, like the chart published in The Atlantic last week, which shows the declining reliance of universities on tenured faculty. And I tell my own story, which is overwhelmingly positive. I may not have finished (yet), and, like any grad student, I had my moments of panic. But I loved graduate school, and I miss it. In particular, I miss the conversations. Talking with my students, I found and expressed my best self. The office hours I spent in conversation with my professors stand out, even years later, as extraordinary experiences. I wish that everyone I know could have them, too.

But, talking to my students, I’m aware that there are too many unknowns. There are too many ways in which a person can be disappointed or fulfilled. It’s too unclear what happiness is. It’s too uncertain how the study of art, literature, and ideas fits into it all. (I’ve never forgotten the moment, in Saul Bellow’s “Herzog,” when Herzog thinks, “Much of my life has been spent in the effort to live by more coherent ideas. I even know which ones”; Herzog knows everything except how to live and do good. And yet what he knows is so extraordinary. As a grad student, I led a fascinating and, obviously, somewhat ironic discussion of that quote.) And, finally, life is too variable, and subject to too many influences. A person’s life, Eliot writes, also at the end of “Middlemarch,” is


    the mixed result of young and noble impulses struggling amidst the conditions of an imperfect social state, in which great feelings will often take the aspect of error, and great faith the aspect of illusion. For there is no creature whose inward being is so strong that it is not greatly determined by what lies outside it.

I’ll give advice about grad school if you ask me to, and I’m happy to share my experiences. But these bigger mysteries make the grad-school decision harder. They take a career conundrum and elevate it into an existential quandary. In the end, I feel just as ignorant as my curious, intelligent, inexperienced students. All I really want to say is, good luck.