Emilia Javorsky, Future of Life Institute

In this Hope Drop, we welcome Emilia Javorsky, a biomedical scientist, physician, entrepreneur, and the current Director of the Futures Program at the Future of Life Institute.

We explore Emilia's insights into AI's role in global challenges, her drive to balance optimism with realism, and her work in the uncharted territory of AI's interplay with biology.

Listen Here

Emilia envisions a future where a wide range of human talent joins forces with artificial intelligence to tackle global challenges. This isn't just about the speed of AI advancements, but about how they harmonise with human goals.

She stresses the importance of creating positive narratives, ones that integrate AI with genuine human empathy, pointing towards a world where technology complements, not replaces, our connections. To get there, however, she emphasises the need for thoughtful regulation grounded in the real world, and promotes an inclusive, multi-stakeholder approach.

On flourishing futures, Emilia believes AI could improve human health, revolutionise bioengineering, and aid us in space exploration, all the while enhancing human connection. For her, progress is more than mitigating risk; it's about unlocking the vast potential that AI and human collaboration promise.

About the artist
Philipp Lenssen created this art piece with the help of generative AI. Philipp is from Germany and has been exploring technology and art all his life. He developed the sandbox universe Manyland and wrote a technology blog for seven years. He's currently working on new daily pictures at Instagram.com/PhilippLenssen

Discover More

Discover the x-hope Library

Don't miss your opportunity to be a part of one of our Vision Weekends, our annual member festivals celebrated in two countries over two weekends. Engage with top thinkers in biotechnology, nanotechnology, neurotechnology, computing, and space exploration. Break out of your tech silos and plan for a prosperous, long-term future.

Vision Weekend France: 17-19 November
Vision Weekend USA: 1-3 December

Review the weekend agendas and confirmed participants here.

Discover Here

Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) is a research initiative facilitating work that draws on parallels between intelligent behavior in natural and artificial systems, leveraging these insights to make AI systems safe, beneficial, and aligned.

They provide tailored, longer-term support to excellent researchers pursuing “PIBBSS-style” AI alignment research.

Deadline: November 5th 2023! 
Provided: A full-time salary, a research community, and operational support.
Timeline: 6 months, with potential extensions to a year or more.

Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe – 80,000 Hours Podcast 

Among other things, this podcast covers: 

  • Whether there's a best possible world or we can just keep improving forever

  • What wars might look like if the galaxy is mostly settled

  • The impediments to AI or humans making it to other stars

  • How the universe will end a million trillion years in the future

  • Whether it’s useful to wonder about whether we’re living in a simulation

  • The grabby aliens theory

  • Whether civilizations get more likely to fail the older they get

  • The best way to generate energy that could ever exist

  • The likelihood that life from elsewhere has already visited Earth

Existential Risk and Rapid Technological Change – The Simon Institute for the UNDRR

  • This paper explores the critical intersection of existential risk and emerging technologies, such as biotechnology and artificial intelligence, within the Sendai Framework.

  • As the pace of technological advancement outstrips risk governance, the need for reform becomes evident.

  • To address these challenges, the UN must foster a common understanding of existential risk, strengthen governance, allocate more resources, and establish swift response mechanisms.

  • The proposed international coordination mechanism, together with the inclusion of high-impact risks in funding instruments, aims to build a safer world for generations to come.

Artificial Intelligence, Morality, and Sentience – The Sentience Institute

  • Sentience Institute recently released the 2023 results of their Artificial Intelligence, Morality, and Sentience (AIMS) survey.

  • Americans are significantly more concerned about AI in 2023 than they were in 2021, before ChatGPT's release.

  • 71% support government regulation that slows AI development.

  • 68% agreed that we must not cause unnecessary suffering to large language models (LLMs), such as ChatGPT or Bard, if they develop the capacity to suffer.

  • 20% of people think that some AIs are already sentient, 37% are not sure, and 43% say they are not.

  • Please see further results here.

