Innat8 Future Plan
Our mission is to ensure that advancements in AI and robotics—particularly systems that can outperform humans in key areas—contribute to the overall well-being of humanity.
If highly advanced AI and robotics are successfully developed, these technologies could elevate human society by enhancing prosperity, accelerating economic growth, and unlocking new scientific discoveries that push the boundaries of what’s possible.
AI and robotics have the potential to provide people with extraordinary new abilities. We envision a world where everyone can access support for nearly any task, amplifying human ingenuity, creativity, and problem-solving.
However, such powerful systems also pose significant risks, including misuse, accidents, and societal disruption. Given the tremendous potential benefits, we don’t believe halting the development of AI and robotics is either feasible or desirable. Instead, society and technology developers must find ways to ensure their responsible creation and use.
While we cannot predict the future with certainty, and acknowledge that progress may face unexpected challenges, we can define the principles that matter most to us:
1. We want AI and robotics to empower humanity, enabling it to flourish to its fullest potential. While we don’t expect a flawless utopia, we aim to maximize positive outcomes and minimize negative impacts, making these technologies amplifiers of human potential.
2. We want the benefits, accessibility, and governance of AI and robotics to be shared equitably across society.
3. We want to carefully navigate the significant risks these technologies pose. In addressing these risks, we recognize that theory often diverges from practice in unexpected ways. We believe in deploying less powerful iterations of AI and robotics to continuously learn and adapt, reducing the need for “one shot to get it right” scenarios.
Short-Term Plan
There are several key actions we believe are necessary now to prepare for advancements in AI and robotics. First, as we develop increasingly powerful systems, we aim to deploy them in real-world settings to gain experience and insights. This gradual approach to integrating advanced AI and robotics allows for a smoother transition, which we believe is preferable to a sudden shift. We expect that powerful AI and robotics will accelerate progress in many fields, and an incremental introduction will help society adjust more effectively.
A gradual transition provides people, policymakers, and institutions the time to understand these technologies, experience both their advantages and drawbacks, adapt the economy, and implement necessary regulations. This co-evolution between society and AI/robotics will allow collective decision-making at a time when the stakes are still relatively manageable.
We currently believe the best way to navigate the challenges of AI and robotics deployment is through a tight feedback loop of rapid learning and careful iteration. Society will confront significant issues, including determining the appropriate roles for AI, addressing bias, handling job displacement, and more. The optimal solutions will depend on how the technology evolves, and as with many new fields, expert predictions have often been incorrect. This makes long-term planning in isolation quite difficult.
In general, we think broader use of AI and robotics will result in positive outcomes, and we support encouraging their adoption (by making models available via APIs, open-sourcing, etc.). We believe that democratizing access will lead to more robust research, the decentralization of power, broader benefits, and a wider range of new ideas and innovations from diverse contributors.
As our systems approach higher levels of sophistication, we are adopting an increasingly cautious stance regarding their development and deployment. Our decisions now require far more caution than is typically applied to new technologies—and more than many users might prefer. Some experts argue that the risks associated with advanced AI and robotics are exaggerated; while we hope they are right, we will operate as though these risks could be significant.
At some point, the balance between the benefits and risks of continued deployment—such as enabling malicious actors, causing social and economic disruption, or fueling an unsafe race—may shift. If that happens, we will reassess our plans and adjust our deployment strategies accordingly.
Second, we are working to build increasingly aligned and adaptable systems. Our transition from early AI models to the Ava Core System (ACS), a modular, block-based system designed to emulate the communication behaviors of the human body, is a prime example. This universal system aims to merge AI with robotics, creating a foundation for seamless integration between the two fields.
Specifically, we believe it is critical for society to reach broad consensus on the acceptable uses of AI and robotics, while allowing individual users significant freedom within those bounds. Our long-term vision is that global institutions will align on these parameters, but in the meantime, we plan to run experiments and gather external input to shape our approach. Strengthening these institutions with new capabilities and expertise will be essential as they face complex decisions regarding AI, robotics, and systems like ACS.
The “default mode” for ACS and other related models, like inn8-montuno—our first model capable of predicting human emotions through text, audio, and vision, going far beyond basic sentiment analysis—will likely be quite constrained. However, we aim to provide users with the ability to customize the behavior of these systems to suit their individual needs. We believe in the power of individual choice and the value of diverse perspectives in shaping how these technologies evolve.
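This “constrained default, customizable within bounds” design can be pictured as a simple policy clamp: society-wide limits are fixed, and each user tunes behavior only inside them. The sketch below is purely illustrative; the class and function names are hypothetical and do not correspond to any real Innat8 API.

```python
from dataclasses import dataclass

# Hypothetical illustration: bounds agreed by broad consensus are fixed,
# while each user may adjust behavior only within those bounds.
@dataclass(frozen=True)
class PolicyBounds:
    min_caution: float   # floor set by societal consensus
    max_autonomy: float  # ceiling set by societal consensus

def apply_user_preferences(bounds: PolicyBounds,
                           requested_caution: float,
                           requested_autonomy: float) -> dict:
    """Clamp user-requested settings to the agreed bounds."""
    return {
        "caution": max(bounds.min_caution, requested_caution),
        "autonomy": min(bounds.max_autonomy, requested_autonomy),
    }

bounds = PolicyBounds(min_caution=0.7, max_autonomy=0.5)
# A user asking for low caution and high autonomy is pulled back
# inside the consensus bounds.
settings = apply_user_preferences(bounds,
                                  requested_caution=0.2,
                                  requested_autonomy=0.9)
```

The point of the sketch is only the shape of the mechanism: individual choice operates freely, but never outside limits set collectively.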
As ACS and inn8-montuno become more advanced, we will need to develop new alignment techniques and create methods for identifying when current approaches fall short. In the near term, we plan to use AI to assist humans in evaluating outputs from increasingly complex models and monitoring intricate systems. Over time, AI itself will help us discover new alignment strategies that ensure these systems remain safe and aligned with human goals.
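One simple way to picture AI-assisted evaluation of model outputs is a triage loop: an evaluator model scores each output and escalates low-scoring cases to human reviewers. The sketch below is a toy stand-in, not a description of Innat8’s actual methods; the scoring heuristic is invented for illustration.

```python
# Illustrative sketch of AI-assisted oversight. In practice the
# evaluator would itself be a model; here it is a toy heuristic.

def evaluator_score(output: str) -> float:
    """Stand-in evaluator: returns a score in [0, 1], penalizing a
    flagged keyword. A real evaluator would be a trained model."""
    return 0.2 if "unsafe" in output else 0.95

def triage(outputs: list[str], threshold: float = 0.8):
    """Split outputs into auto-approved vs. escalated-for-human-review."""
    approved, escalated = [], []
    for out in outputs:
        (approved if evaluator_score(out) >= threshold else escalated).append(out)
    return approved, escalated

approved, escalated = triage(["helpful answer", "unsafe instruction"])
```

The design choice this illustrates is leverage: human attention is spent only where the automated evaluator is least confident, which becomes essential as model outputs grow too numerous and complex for full manual review.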
Importantly, we believe that advancements in AI safety and capabilities must progress together. It is misleading to consider them as separate; in reality, they are deeply interconnected. Our best safety breakthroughs have emerged from working with our most capable models. Nonetheless, it is crucial that the ratio of safety improvements to capability advancements continues to grow as our systems become more powerful.
We believe it is essential for initiatives like ours to undergo independent audits before launching new systems; we will share more details on this later this year. In the future, it may be necessary to have independent reviews before initiating the training of new systems, and for the most advanced projects to agree on limiting the growth of computational power used to develop new models. Public standards for when an AI or robotics project should pause a training run, determine a system is safe to release, or withdraw it from use are crucial. Additionally, we think it’s important for major governments worldwide to have visibility into training runs that exceed a certain scale.
Long-Term Plan
We believe the future of humanity should be shaped by humanity itself, making it crucial to keep the public informed about significant progress. Efforts to develop advanced AI and robotics must be subject to intense scrutiny, with public consultation playing a key role in major decisions.
The development of the first highly advanced AI or robotics system will mark just one point along a continuum of intelligence and technological progress. We expect advancements to continue, potentially maintaining the rapid pace of innovation seen over the past decade for a significant period. If this proves true, the world could evolve in ways that are vastly different from today, and the associated risks could be immense. A misaligned super-intelligent system could cause severe damage, and a regime with access to such technology could pose an even greater threat.
AI capable of accelerating scientific discovery is a particularly important case to consider, as its impact could outweigh all other applications. AGI or robotics systems that can drive their own progress might lead to rapid, transformative changes. Even if the initial transition is gradual, the final stages could unfold quickly. We believe that a slower, more controlled progression would be easier to manage safely. Coordination among advanced AI and robotics projects to decelerate at critical points will likely be necessary, even if it’s not required to solve technical alignment issues. Slowing down might be essential to give society time to adapt.
Successfully transitioning to a world with super-intelligent AI and robotics is possibly the most significant—and both hopeful and daunting—undertaking in human history. Success is far from guaranteed, and the stakes, which range from boundless risks to boundless benefits, should ideally bring us together.
We can envision a future where humanity thrives in ways that are currently beyond our imagination. Our aim is to contribute to a future where advanced AI and robotics are aligned with that vision of flourishing.