WorldCon 2016: Generation starships

Disclaimer: I am not perfect and neither are my notes. If you notice anything that requires clarification or correction, please email me at melanie (dot) marttila (at) gmail (dot) com and I will fix things post-haste.


Panellists: Pat Cadigan, Gregory Benford, Mark W. Tiedemann, Brenda Cooper (moderator), Jerry Pournelle

Joined in progress …

GB: We can work out the engineering problems. The people problems, we can’t.

JP: We have to have some form of artificial gravity. Currently, interstellar travel can only be accomplished by accelerating halfway and then decelerating the other half. The Fermi paradox suggests there might be only one civilization per galaxy: not one planet, not one planet with some form of life, but one civilization.
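
For anyone curious about the accelerate-then-decelerate profile JP mentions, here is a back-of-the-envelope sketch (mine, not the panel's). It ignores relativity and assumes a constant 1 g of thrust, which is exactly why the naive answer breaks down for interstellar distances:

```python
import math

G = 9.81                      # 1 g of acceleration, in m/s^2
LIGHT_YEAR = 9.4607e15        # metres in a light-year

def flip_and_burn_time(distance_m, accel=G):
    """Non-relativistic travel time: accelerate for the first half of the
    trip, flip the ship, and decelerate for the second half."""
    half = distance_m / 2
    t_half = math.sqrt(2 * half / accel)   # from d = (1/2) a t^2
    return 2 * t_half                      # seconds for the whole trip

# Example: roughly 4.24 light-years to Proxima Centauri.
seconds = flip_and_burn_time(4.24 * LIGHT_YEAR)
print(f"{seconds / (3600 * 24 * 365.25):.1f} years (naive figure; the ship "
      "would pass light speed, so a relativistic treatment is needed)")
```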

PC: People choose to live in habitats orbiting Earth. They don’t have artificial gravity. The solution could be epigenetics. Adapt the body to life in space. Once you pass a few generations, the privations become irrelevant. Then we have to face the challenges of exploration and colonization of new worlds. We’ve faced some of these problems before. The prairie skies produced agoraphobia. When the generation ships land, people will be totally freaked. We’ll need to regulate space and noise.

BC: There was a 100 year starship symposium at which it was posited that generation ships would have to have a military-like social structure.

MWT: I don’t see why we’d want to do that. It might work, but it wouldn’t deliver the benefits that make such a system worth it.

GB: That might be the wrong analog. If you have a pool, you need a lifeguard. The army has a purpose in the larger community. A generation ship is a community.

JP: The Polynesians who settled Hawaii knew they were going on a one-way trip. A worker who lives and works in Manhattan and never leaves it might as well be in a colony.

PC: If we have habitats around Saturn, they’re too far away for help to reach them in the case of an emergency. It would have to be a regimented society. They would have to constantly check their equations, their plans. They would never want to be doing something for the first time.

MWT: The personalities of the volunteers will influence what happens on the ship, and in the colony.

BC: What would people on the ship do for fun?

GB: What does anyone do? Sex, drugs, and rock ‘n’ roll.

PC: Even the frivolous pursuits would have to be engineered.

MWT: I think virtual reality would be a major component.

BC: How can you teach generation after generation order and discipline and then expect innovation and creativity to emerge at the destination?

JP: That’s what novelists are for.

And that was time.

Next week: The dark side of fairy tales 🙂

Thanks for stopping by. Hope you found something of interest or entertainment.

Be well until next I blog.

WorldCon 2016: Humans and robots

Disclaimer: I am not perfect and neither are my notes. If you notice anything that requires clarification or correction, please email me at melanie (dot) marttila (at) gmail (dot) com and I will fix things post-haste.

Panellists: Kevin Roche, G. David Nordley, Brenda Cooper (moderator), Walt Boyes, Jerry Pournelle


Joined in progress …

KR: They have built and programmed competent bartending robots.

GDN: There’s an S-curve with any technological development. If you picture the letter S and start from the bottom of the letter, robotics is at the first upsweeping curve.
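
For readers unfamiliar with the metaphor, the S-curve GDN describes is the shape of the logistic function. A minimal sketch follows; the growth parameters are invented purely for illustration:

```python
import math

def logistic(t, ceiling=1.0, growth=1.0, midpoint=0.0):
    """Classic S-curve: slow start, rapid middle, saturation near the ceiling."""
    return ceiling / (1 + math.exp(-growth * (t - midpoint)))

# Early in the curve (GDN's "first upsweeping curve"), values are still small.
for t in (-4, -2, 0, 2, 4):
    print(t, round(logistic(t, growth=1.2), 3))
```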

WB: Google is the largest robotics company in the world. Boston Robotics sells services in robotic hours.

JP: With regard to artificial intelligence (AI), every time we started something that looked like AI, people said nope, that ain’t it. Unemployment is higher than the statistics report. In the near future, over half of all jobs will be replaced by robots or other automation. The unemployable won’t be visible. They won’t be looking. We’ve not lost jobs to overseas corporations, or not as many as we think. We’ve lost jobs to automation. The “useless” class is on the rise. Look at it this way: an employer saves an employee’s annual salary and spends maybe 10% of it maintaining a robot that does the same work. They’d only need one human to service 20 robots.
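
A rough worked version of JP's cost comparison; the salary figure is a placeholder I've assumed, not a number from the panel:

```python
# Hypothetical figures to illustrate the 10% / one-human-per-20-robots claim.
annual_salary = 40_000                     # assumed cost of one human worker
robot_maintenance = 0.10 * annual_salary   # ~10% of a salary per robot, per JP
robots_per_technician = 20                 # one technician services ~20 robots

# Replacing 20 workers with 20 robots plus 1 technician:
human_cost = 20 * annual_salary
automated_cost = robots_per_technician * robot_maintenance + 1 * annual_salary
print(f"humans: ${human_cost:,.0f}  automated: ${automated_cost:,.0f}  "
      f"savings: ${human_cost - automated_cost:,.0f}")
```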

BC: How do you assign value to human work?

WB: In 1900, the second industrial revolution saw farm workers move to the cities and the factories. The real issue is a philosophical one. We’ve been assigning value to people by the work that they do. A corporate lawyer has, subjectively, greater value than a garbage man. What happens when automation and artificial intelligence replace both?

KR: When workers are underpaid, the social contract bears the cost. Increasing the minimum wage and increased automation are exposing the dirty little secret. People need to be valued differently. Teachers and artists, in particular, can’t be replaced.

GDN: The top-level documents of our society assign value to every citizen. The big question is how we realize that. The recession has meant fewer tax dollars dedicated to the arts and infrastructure. We have to have the social conversation.

JP: Will advances in artificial intelligence implement Asimov’s three laws? Drones don’t use the three laws. Google DeepMind created an AI that beat a human at Go [the game]. They took two machines, programmed them with the rules of the game, and let them play each other. After ten million games, they could functionally beat anyone. If you ask a robot to stop humans from killing each other, what’s to stop the robot from coming up with the solution of killing all humans? We have to proceed carefully.
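
The “two machines playing each other” idea can be shown in miniature. The toy below is nothing like AlphaGo (no learning, no neural networks, just two random players and a tally), but it illustrates the self-play loop JP describes:

```python
import random

def play_nim(pile=15, max_take=3):
    """One game of Nim between two random players; returns the winner (0 or 1)."""
    player = 0
    while pile > 0:
        take = random.randint(1, min(max_take, pile))
        pile -= take
        if pile == 0:
            return player          # whoever takes the last stone wins
        player = 1 - player

wins = [0, 0]
for _ in range(10_000):            # far fewer than the ten million games cited
    wins[play_nim()] += 1
print(f"player 0: {wins[0]} wins, player 1: {wins[1]} wins")
```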

KR: Watson won Jeopardy. Its job is to parse huge amounts of information and look for patterns. It’s humans who decided to test the system by putting it on the show.

GDN: Right now, computers are still, by and large, working on bookkeeping tasks. As we get to the point where we have to consider the three laws, we have to be cautious.

WB: We have to expand our definition of robotics. We have the internet of things, with programmable thermostats and refrigerators we can access through our phones. Though still imperfect, we have self-driving cars. We need to figure out how to program morality.

GDN: Human beings don’t consistently make the same moral choices. Fuzzy logic and data sets would be required. Positronic brains would have to deal with potentialities.

KR: We don’t have an algorithmic equivalent for empathy.

And that was time.

Next week, we’re going to explore the steampunk explosion 🙂

Until then, be well, be kind, and be awesome!