An AI box is an isolated hardware system in which an artificial intelligence is kept constrained inside a simulated world, unable to affect the external world. Such a box would have severely restricted input and output channels; perhaps only a plain-text channel. However, a sufficiently intelligent AI may simply be able to escape from any box we can create: for example, it might crack the protein folding problem and use nanotechnology to escape, or simply persuade its human 'keepers' to let it out.
Some intelligence technologies, like seed AI, have the potential to make themselves not just faster but more intelligent, by modifying their own source code. These improvements would make further improvements possible, which would in turn make still further improvements possible, and so on.
This mechanism for an intelligence explosion differs from a mere increase in speed in that it requires no external effect: machines designing faster hardware still depend on humans to build the improved hardware or to program factories appropriately. An AI that rewrites its own source code, however, could do so while contained in an AI box.
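The distinction above can be sketched as a toy simulation. This is purely illustrative: the functions, parameter values, and growth rules (`speedup`, `gain`) are arbitrary assumptions invented for the example, not a model drawn from the literature.

```python
# Toy contrast between two growth modes (illustrative assumptions only).

def speed_only(generations, speedup=1.5):
    """Faster hardware: each generation does the same work, sooner.
    Capability per step stays constant; only wall-clock time shrinks,
    and the total time saved is bounded (a converging geometric series)."""
    time_per_step = 1.0
    total_time = 0.0
    for _ in range(generations):
        total_time += time_per_step
        time_per_step /= speedup  # humans must build each faster machine
    return total_time

def self_improving(generations, gain=0.5):
    """Seed AI: each improvement raises the ability to make the next
    improvement, so capability itself compounds -- no external help needed."""
    capability = 1.0
    for _ in range(generations):
        capability += gain * capability  # a better AI designs a still better AI
    return capability

print(speed_only(10))       # approaches a finite limit (3.0 here)
print(self_improving(10))   # grows geometrically: 1.5 ** 10
```

The point of the contrast is that speeding up a fixed process yields diminishing returns, while feeding each improvement back into the improver yields compounding ones.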
- Yudkowsky, Eliezer (2008), "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in Bostrom, Nick; Cirkovic, Milan (eds.), Global Catastrophic Risks, Oxford University Press, Bibcode 2008gcr..book..303Y, ISBN 978-0-19-857050-9, http://singinst.org/AIRisk.pdf
- Berglas, Anthony, "Artificial Intelligence Will Kill Our Grandchildren (Singularity)"
- Chalmers, David J., "The Singularity: A Philosophical Analysis"