Ambiguous/Dynamic

bunmer:

redhester:

bunmer:

A young Jewish refugee with her Chinese playmates. Shanghai, China (x)

Between 1933 and 1941, it is estimated that 20,000 Jews escaped persecution by fleeing to the Chinese port of Shanghai. Shanghai was one of the few places in the world that would accept Jewish refugees at this time, Japan being another.

i am furious that i am just now learning about this important fact.

Because it has nothing to do with the USA being the superhero and saving all the Jews

scifi-fantasy-horror:

by Dan Mora

willlaren:

"Little Dog/Punk Alliance"

kissmyasajj:

fuckyeahwarriorwomen:

duckindolans:

daughterofmulan:

theblindninja:

The Pirates Official Posters

What is this glorious looking glory.

WHAT IS THIS

Pirates (2014 film)

Set in the early Joseon Dynasty, a group led by a female pirate and another led by a male bandit are on a mission to hunt down a whale that swallowed the royal seal bestowed on Joseon by China.

Yes!

whippit-princess:

lasso:

Guys seriously would you LOOK at mini Adam Scott from Boy Meets World circa 1994

was this when he was mayor

artissimo:

by couscous team

The Art of Brom

visualreverence:

sadgas-art's AUDI A4 ROBOTS COMMERCIAL

mindblowingscience:

Ethical trap: robot paralysed by choice of who to save

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is

CAN we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine’s response.

In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov’s fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.
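
A toy simulation makes the failure mode concrete. The sketch below is hypothetical, not Winfield's actual code: two proxies start equidistant from holes on opposite sides of the robot, so the "whose fall is most imminent?" question stays tied, and a robot that re-plans every tick can flip targets forever while one that commits to a target intercepts it.

```python
# A toy model of Winfield's trap; a hypothetical sketch, not his actual code.
# Two proxies start equidistant from their own holes, on opposite sides of
# the robot, so the tie between them never breaks. A robot that re-plans
# every tick flips targets forever and saves nobody; one that commits to a
# target intercepts it, then turns to the other.

ROBOT_SPEED = 2.0
PROXY_SPEED = 1.0

def simulate(dither):
    robot = 0.0
    proxies = {"a": 5.0, "b": -5.0}    # positions on a line
    holes = {"a": 15.0, "b": -15.0}    # each proxy walks toward its hole
    saved, lost, committed = set(), set(), None
    for tick in range(100):
        active = [n for n in proxies if n not in saved | lost]
        if not active:
            break
        if dither:
            # Naive tie-break: alternate targets every tick.
            target = active[tick % len(active)]
        else:
            # Commit to the proxy currently closest to its hole and
            # stay with it until it is saved or lost.
            if committed not in active:
                committed = min(active, key=lambda n: abs(holes[n] - proxies[n]))
            target = committed
        robot += ROBOT_SPEED if proxies[target] > robot else -ROBOT_SPEED
        for n in active:
            if n == target and abs(robot - proxies[n]) <= ROBOT_SPEED:
                saved.add(n)           # within one step: intercepted in time
                continue
            proxies[n] += PROXY_SPEED if holes[n] > 0 else -PROXY_SPEED
            if abs(holes[n] - proxies[n]) <= 1e-9:
                lost.add(n)            # fell into the hole
    return saved, lost

print("committed:", simulate(dither=False))  # saves both proxies here
print("dithering:", simulate(dither=True))   # loses both, like the 14/33 runs
```

With these numbers the committed planner saves both proxies while the ditherer loses both, echoing how Winfield's robot sometimes saved one or both but lost both in 14 of 33 trials.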

Winfield describes his robot as an “ethical zombie” that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, “my answer is: I have no idea”.

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.

But robots designed for military combat may offer the beginning of a solution. Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an “ethical governor” – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.
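
The governor idea can be read as a veto layer that checks candidate actions against hard constraints before anything executes. Here is a minimal sketch of that pattern with illustrative rules, thresholds, and names (all hypothetical; Arkin's actual system is far more elaborate):

```python
# A hypothetical sketch of an "ethical governor" as a veto layer; the
# rules, thresholds, and field names are illustrative, not Arkin's system.
from dataclasses import dataclass

@dataclass
class Engagement:
    target: str
    distance_to_protected_site: float  # metres to nearest school/hospital
    expected_collateral: int           # predicted civilian casualties

# Hard constraints derived from the rules of war: any single violation
# vetoes the action outright, regardless of its tactical value.
CONSTRAINTS = [
    lambda e: e.distance_to_protected_site > 500.0,
    lambda e: e.expected_collateral == 0,
]

def governor_allows(e: Engagement) -> bool:
    """Pass only actions that violate no constraint."""
    return all(rule(e) for rule in CONSTRAINTS)

for e in (Engagement("vehicle", 1200.0, 0), Engagement("vehicle", 120.0, 0)):
    print(e, "->", "engage" if governor_allows(e) else "hold fire")
```

Because the constraints are conjunctive vetoes rather than weighted scores, one rule-of-war violation is enough to hold fire, which is the property Arkin's point about encoded treaties relies on.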

Arkin says that designing military robots to act more ethically may be low-hanging fruit, as these rules are well known. “The laws of war have been thought about for thousands of years and are encoded in treaties.” Unlike human fighters, who can be swayed by emotion and break these rules, automatons would not.

"When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces," says Wendell Wallach, author ofMoral Machines: Teaching robots right from wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”

This article appeared in print under the headline “The robot’s dilemma”

Watch a video of these ‘ethical’ robots in action here

spacetwinks:

i just found out that the guy who played jimmy lee in the inexplicable and practically incoherent 90s double dragon movie is the same guy who hosted iron chef america and i’m having some weird out of body feeling about it

sassy-hook:

pleasant-trees:

aprilsvigil:

manticoreimaginary:

Watching this (and fearing broken ankles with each loop) I can’t help thinking about that old quote: Ginger Rogers did everything Fred Astaire did, except backwards and in high heels.

But no, if you watch closely you’ll see she doesn’t even step on the last chair. That means she had to trust that fucker to lift her gently to the ground while he was spinning down onto that chair. That takes major guts. I’d be pissing myself and fearing a broken neck if I were in her place. Kudos to her. 

I can’t stop watching this. 

#I watched this for too long to not reblog

(Source: ohrobbybaby)