Alife News

The Artificial Life Community Newsletter

Artificial Life Newsletter 009 -- [micro] organisms | cosms | scale

A word from the team

Happy 2023, and welcome to the 9th issue of the Alife Newsletter!

For the first edition of 2023, we have some technical news: The newsletter now has an RSS feed! If you have an RSS reader, you can point it to this link to receive announcements of new editions of the newsletter. Give it a try!

The theme for this edition is "[micro] organisms | cosms | scale". For micro-organisms, we report on a novel wearable device that integrates a living mold with electronics. For micro-cosms, we link to a review of small programs that can generate complex worlds. And for micro-scale, we share a delightful artwork imagining the origins of life.

In addition, we have a paper review about life as information and the syllabus of a new Artificial Life course, as well as CFP deadlines and announcements of PhD positions! We hope you find something that piques your interest.

As usual, we welcome contributions, ideas and suggestions to the newsletter via this form! In particular, we are especially interested in Master's and PhD students who want to talk about their own research ideas. Do drop us a line!

If you'd like to receive our news, you can subscribe by e-mail here, or by RSS here.

Lana, Imy, Mitsuyoshi, Claus and Katt.

Paper: “Integrating Living Organisms in Devices to Implement Care-based Interactions”

By Jasmine Lu and Pedro Lopes (University of Chicago's Human Computer Integration Lab)

Slime mold watch

In this paper, we explore how embedding a living organism (in this case a slime mold, Physarum polycephalum) as a functional component of a device changes the user-device relationship. In our design, the user needs to care for the living organism (by providing food and water) in order for the device to work. When healthy, the organism participates in the device's functionality by acting as a physical wire that supplies power to the watch's heart rate sensor. As such, caring for the device is intrinsic to its interaction design: with the user's care, the slime mold becomes conductive and enables the sensor; without care, the slime mold dries out and disables the sensor; and resuming care resuscitates the slime mold.

In addition to engineering this device, we also conducted a user study where participants wore our slime mold-integrated smartwatch for 9-14 days. We found that participants developed a unique connection towards their slime mold-integrated device, with many feeling a sense of responsibility and/or reciprocity.

Rather than a user-device relationship built on extractive use, our approach explores how devices can be designed to encourage the user to take on a caretaking role. We're excited about how our approach might foster new discussions about how we might rethink the user-device relationship.

How it works

Read our paper or watch the project video

ALife Paper Review: The World as Evolving Information

https://arxiv.org/abs/0704.0304

Paper review by Michael Crosscombe

I recently transitioned into Artificial Life research and I have been digging through the digital archives to familiarise myself with some of the exciting ideas in the field. Given my outsider’s perspective, I find that I am most drawn to papers which offer a novel way of thinking about the world and, in doing so, provide a different lens through which we can approach the ongoing problems of the field.

One such paper is "The World as Evolving Information" by Carlos Gershenson. The author proposes that we adopt metaphors as our common language and then goes on to describe the world at different scales – not in terms of energy and matter, but in terms of information! Beginning simply, information is defined as "anything that an agent can sense, perceive or observe." After further definitions of an agent and its environment, it becomes clear why such an approach to thinking about the world is rather powerful. An agent simply acts on its environment (sensing and responding), while the environment is everything that interacts with the agent. From this simple setup, the author then introduces a list of "Tentative Laws of Information", each of which appears reasonable and intuitive.

While not a solution to all of ALife's open problems, I do think it is an interesting way to reframe existing problems and assess whether different approaches to tackling them can be taken. Consider, for example, collective behaviour. It is difficult to explain how large swarms of individuals coordinate to produce collective behaviour without considering the forms of information that are shared, propagated, and transformed to achieve it. We can also begin to consider the scales of these systems: simple individuals are only capable of acting locally and cannot perceive more complex global information, such as the current state of the swarm or its coordinated collective behaviour. Yet the swarm, as a more complex entity, seems capable of producing more complex information (behaviours) than any one individual, and reasoning about the relationship between the micro and macro levels is much more intuitive when we think about them within the proposed framework.

A system more easily translated into the world of evolving information may be Cellular Automata (CA). These systems have well-defined, often deterministic rules about how information (the state of a cell) changes based on the information that an agent (the cell itself) can perceive. Such a discrete system should be rather trivial to quantify, but how would something more complex, such as Lenia, translate into the same world? This is far less trivial, but presumably still feasible.
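To make the CA case concrete, here is a minimal sketch (my own illustration, not code from the paper) of an elementary cellular automaton described in the paper's terms: each cell is an agent, the information it perceives is its own state plus that of its two neighbours, and the update rule transforms that perceived information into the cell's next state. The rule number and world size are arbitrary choices.

```python
# A minimal illustration (not from the paper): an elementary cellular
# automaton viewed through the "information" lens. Each cell is an agent;
# the information it perceives is its own state plus its two neighbours',
# and the rule maps that perceived information to the cell's next state.

RULE = 110  # Wolfram rule number; an arbitrary choice for illustration

def step(cells):
    """One synchronous update with wrap-around boundaries."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        perceived = (left << 2) | (centre << 1) | right   # 3 bits of local information
        nxt.append((RULE >> perceived) & 1)               # transformed into the next state
    return nxt

# A small world seeded with a single "on" cell.
world = [0] * 31
world[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in world))
    world = step(world)
```

Lenia would need the same ingredients, perceived information and a transformation rule, but with continuous states and much larger neighbourhoods, which is presumably where the quantification becomes less trivial.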

Frustratingly, at the end of the paper the author only hints that such a framework could be implemented in simulation. I am curious whether the author has made any progress in this direction: creating an information-based simulation that implements the tentative laws outlined in the paper.

A new course on Artificial Life

by Sam Kriegman

robots

In this, my first year as an assistant professor at Northwestern University, I am building out my research group and teaching my very first course: Artificial Life. The syllabus is available here. One hundred and thirty extremely bright and creative students are now learning about the wonderful world of ALife for the very first time, and I get to stand in front of them each week as Morpheus with a firehose of red pills, feeding them all of the beautiful and inspiring work of this community, and watching their minds explode.

Because Northwestern is on a quarter system, I have the impossible task of covering the entirety of ALife in just nine weeks. And, as a new assistant professor, I have been flying by the seat of my pants. I am writing this to share my excitement and, also, to solicit comments, compliments and criticisms from the community. You can email me directly or share anonymous feedback through the form linked in the footer of the syllabus.

This new ALife class stands firmly on the shoulders of Ludobots, a Reddit-based MOOC that gently guides students, step by step, toward evolving possible brains (neural controllers) of a motile creature (kinematic tree) in a virtual world (rigid body simulation). After completing Ludobots, however, the students in this new course will be thrown straight into the deep end: they will be tasked with evolving not only brains but bodies too, without step-by-step instructions. I have no idea how this will turn out. We could end up with 130 populations of wiggling worms of varying lengths and sizes, without any real "march of progress" in terms of morphological and behavioral complexity. But I think this is unlikely. Did I mention how incredibly bright these students are? I made sure to warn the students that the assignments will suddenly become much more challenging midway through the course. What I have not told them (yet) is that they will be working on a problem that no one really has any idea how to solve. Who knows, maybe one of them will end up solving it.
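For readers who have not met this kind of assignment before, the core of the exercise is a standard evolutionary loop. The sketch below is a deliberately stripped-down illustration, not the Ludobots code: the genome is just a flat vector of controller weights, and the fitness function is a placeholder standing in for the distance a simulated creature travels in the physics engine.

```python
# A stripped-down sketch of an evolve-a-brain loop (not the Ludobots code).
# The genome is a flat vector of controller weights; evaluate() is a
# placeholder standing in for "distance travelled in a rigid body simulation".
import random

GENOME_LENGTH = 12       # assumed number of controller weights
POPULATION_SIZE = 20
GENERATIONS = 50
MUTATION_STD = 0.1

def evaluate(genome):
    # Placeholder fitness: in the real assignment this would run a physics
    # simulation and return how far the creature moved.
    return -sum((w - 0.5) ** 2 for w in genome)

def mutate(genome):
    return [w + random.gauss(0.0, MUTATION_STD) for w in genome]

population = [[random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=evaluate, reverse=True)          # rank by fitness
    parents = population[:POPULATION_SIZE // 2]          # truncation selection
    children = [mutate(random.choice(parents))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

print("best fitness:", max(map(evaluate, population)))
```

Evolving bodies as well would mean extending the genome to also encode morphology (for example, limb lengths and joint placements), which is roughly the open-ended challenge the second half of the course asks students to take on.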

Hiring PhD Candidate in Biologically Inspired methods for Robotics and Artificial Intelligence at the University of Oslo

by Kai Olav Ellefsen, University of Oslo

The PhD research fellow will carry out research on AI and machine learning techniques (including search / optimization) for robotic systems in the Robotics and Intelligent Systems (ROBIN) group. We wish to build on our previous and ongoing projects in Evolutionary Robotics with the aim of developing new methods for more robust and flexible robotic adaptation. This could include combining Evolutionary Robotics with recent advances from Deep Learning or with more biologically inspired methods, such as Neuroevolution.

Qualifications: Applicants must have education equivalent to a Norwegian master's degree in computer science, robotics, or another relevant field. Thus, applicants should have a strong background in programming, as well as in machine learning/artificial intelligence and robotics. Experience with Evolutionary Algorithms, Evolutionary Robotics, Quality Diversity optimization, Reinforcement Learning, and/or Neuroevolution is considered advantageous.

Pay grade (depending on qualifications and seniority):
NOK 501 200 – 544 400 per year (approx. €/$ 48,150 – 52,350)

Announcement Page

Deadline for applications: February 1st, 2023
Applications are to be submitted through a web page and NOT by e-mail.

Contact for more information: Assoc. Prof. Kai Olav Ellefsen E-mail: kaiolae@ifi.uio.no

Abiogenesis (artwork by Markos R. Kay)

shared by Lana

Video still

I found this video imagining the origins of life in oil droplets mesmerising and wanted to share it with the community. The artist says:

"Presented here is a conceptual reimagining of the "lipid world" theory which postulates that life originated from lipids forming membranes which would then envelop matter and nutrients to form protocells. Biological cells as we now know them can be thought of us membranes within membranes." Watch the full video here.

Emergent Microcosms (a blog post by Samuel Arbesman)

shared by Lana

a screenshot of the game "orb"

Sam Arbesman recently wrote a vibrant Twitter thread, full of videos and links to creative platforms, about the concept of "Emergent Microcosms". He ultimately released a long-form version of his thoughts through his newsletter. Here is a short excerpt from the first paragraph; enjoy the whole post here!

"In all the excitement around Large Language Models and other trendy aspects of Artificial Intelligence, I think that we’ve forgotten an under-appreciated group of computer programs: relatively small snippets of computer code that can generate complex and delightful virtual worlds. [...] Emergent microcosm is a fuzzy category, but it roughly spans biology and artificial life, complexity science, simulation, and creative coding." Read Sam's post

Upcoming Deadlines

Please see below for upcoming deadlines for conferences and events relevant to the ALife community:

  1. The 6th Workshop of Artificial Life Japan: Submission of Title/Author for presentations: 03 February 2023
  2. GECCO 2023 (Lisbon, Portugal & Online): Paper Submission Deadline: 10 February 2023
  3. Art of Cellular Automata Exhibition: Call for entries. Deadline: 20 February 2023
  4. ALife 2023 (Sapporo, Japan & Online): Paper Submission Deadline: 3 March 2023
  5. RO-MAN 2023 (Busan, South Korea & Online): Paper Submission Deadline: 17 March 2023

Call for Volunteers