WorldCon 2016: Alienbuilding

Disclaimer: I am not perfect and neither are my notes. If you notice anything that requires clarification or correction, please email me at melanie (dot) marttila (at) gmail (dot) com and I will fix things post-haste.

Panellists: Caroline M. Yoachim (moderator), G. David Nordley, Ctein, Larry Niven, Sheila Finch

Joined in progress …

GDN: To build aliens, you have to start with the system, planets, and so on down.

C: When it comes to the aliens themselves, a top-down approach means psychology first.

LN: I’ve created aliens with handles on the skull. Humans have bilateral symmetry on the outside. Inside, not so much. An alien can have two dominant arms for fine manipulation, or one extra-muscular arm for heavy lifting. Why not a dwarf elephant with two trunks and fingers on the trunk-tips?

SF: It happens all at once for me. I have an image of the alien. I take a step back and consider what environment might have produced it. Then, I develop the psychology and language. The metaphors used are linked to physiology.

C: I’m happy to steal if it works. I have a species I based on puppets.

CMY: Do you have to balance strangeness with relatability?

GDN: I’m not bothered by aliens that have commonalities with humans. Our basic drives are all the same.

C: There are special, species-related characteristics. Will aliens have religion? Will they be acquisitive? Are they into body augmentation?

SF: Corvids are acquisitive.

LN: I ask myself, what’s the weirdest thing about an alien? Then I extrapolate back.

SF: Sentience and self-awareness have been demonstrated in some animals.

C: One notable characteristic of humans is that we build. If there’s an advanced species out there that doesn’t build, what do they do?

LN: What’s the process of adapting humans to their environments?

CMY: What pitfalls do you see? What are your pet peeves?

GDN: Characters that don’t have survival value.

LN: There was a story based on a hospital station—everyone got sick. [Mel’s note: not every disease will attack every species by the same vector. Zoonosis is not common on Earth. And then, there’s immunity.]

SF: Plant aliens that aren’t done well. Sequoias, for example, would have a chemical intelligence.

C: When the physical worldbuilding isn’t related to the story. If it’s all about the display of worldbuilding prowess, it’s essentially scenery.

CMY: When all the aliens are the same, are they truly “alien” aliens?

GDN: Silicon and oxygen might be able to produce something similar to DNA and RNA. Truly alien aliens are difficult to figure out physiologically and biologically.

SF: With truly alien aliens, their physiology becomes the story. It’s all about explaining how they function.

And that was time.

I’ll have one more WorldCon 2016 session to share with you this month, and it’s more worldbuilding (are you sensing a theme?). Next weekend: Political worldbuilding in science fiction.

Be well, be kind, and stay strong until next I blog.

WorldCon 2016: Humans and robots

Panellists: Kevin Roche, G. David Nordley, Brenda Cooper (moderator), Walt Boyes, Jerry Pournelle

Joined in progress …

KR: They have built and programmed competent bartending robots.

GDN: There’s an S-curve with any technological development. If you picture the letter S and start from the bottom, robotics is at the first upward sweep of the curve.
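GDN's S-curve is usually modelled as a logistic function: slow early growth, a steep middle, and a plateau at saturation. A minimal sketch (the parameters and values here are illustrative assumptions, not anything from the panel):

```python
import math

def s_curve(t, midpoint=0.0, rate=1.0, ceiling=1.0):
    """Logistic (S-shaped) growth curve: slow start, rapid middle, plateau."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Well before the midpoint, growth is just beginning (GDN's "first upward sweep"):
early = s_curve(-4)   # ~0.018 of the eventual ceiling
mid   = s_curve(0)    # 0.5, the steepest point of the curve
late  = s_curve(4)    # ~0.982, approaching saturation
```

By GDN's reading, robotics in 2016 sits near the `early` end of that curve.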

WB: Google is the largest robotics company in the world. Boston Dynamics sells services in robot hours.

JP: With regard to artificial intelligence (AI), every time we started something that looked like AI, people said nope, that ain’t it. Unemployment is higher than the statistics report. In the near future, over half of jobs will be replaced by robots or other automation. The unemployable won’t be visible; they won’t be looking. We haven’t lost as many jobs to overseas corporations as we think. We’ve lost jobs to automation. The “useless” class is on the rise. Look at it this way: an employer saves an employee’s annual salary and spends maybe 10% of it to maintain a robot doing the same work, and needs only one human to service 20 robots.
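JP's back-of-the-envelope economics can be checked with assumed numbers (the salary figure below is purely illustrative; only the 10% maintenance cost and the one-technician-per-20-robots ratio come from the panel):

```python
# Assumed annual salary of a replaced worker (illustrative figure only)
salary = 50_000

# Panel's claims: maintaining a robot costs ~10% of the salary it replaces,
# and one human technician can service 20 robots.
maintenance = 0.10 * salary
robots_per_tech = 20

# Per automated position, the employer pays robot upkeep plus a
# proportional share of one technician's salary:
cost_per_robot = maintenance + salary / robots_per_tech
savings_per_robot = salary - cost_per_robot
# Each automated position costs 7,500/yr instead of 50,000/yr: an 85% saving.
```

Under these assumptions the employer keeps 85% of each replaced salary, which is the force behind JP's point about automation, not offshoring, driving job losses.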

BC: How do you assign value to human work?

WB: In 1900, the second industrial revolution saw farm workers move to the cities and the factories. The real issue is a philosophical one: we’ve been assigning value to people by the work that they do. A corporate lawyer has, subjectively, greater value than a garbage man. What happens when automation and artificial intelligence replace both?

KR: When workers are underpaid, the social contract bears the cost. Increasing the minimum wage and increased automation are exposing the dirty little secret. People need to be valued differently. Teachers and artists, in particular, can’t be replaced.

GDN: The founding documents of our society assign value to every citizen. The big question is how we realize that. The recession has meant fewer tax dollars dedicated to the arts and infrastructure. We have to have the social conversation.

JP: Will advances in artificial intelligence implement Asimov’s three laws? Drones don’t use the three laws. DeepMind created an AI (AlphaGo) that beat a human champion at Go [the game]. They took two machines, programmed them with the rules of the game, and let them play each other. After ten million games, they could functionally beat anyone. If you ask a robot to stop humans from killing each other, what’s to stop the robot from coming up with the solution to kill all humans? We have to proceed carefully.

KR: Watson won Jeopardy. Its job is to parse huge amounts of information and look for patterns. It’s humans who decided to test the system by putting it on the show.

GDN: Right now, computers are still, by and large, working on bookkeeping tasks. As we get to the point where we have to consider the three laws, we have to be cautious.

WB: We have to expand our definition of robotics. We have the internet of things, with programmable thermostats and refrigerators we can access through our phones. Though still imperfect, we have self-driving cars. We need to figure out how to program morality.

GDN: Human beings don’t consistently make the same moral choices. Fuzzy logic and data sets would be required. Positronic brains would have to deal with potentialities.

KR: We don’t have an algorithmic equivalent for empathy.

And that was time.

Next week, we’re going to explore the steampunk explosion 🙂

Until then, be well, be kind, and be awesome!