Peter Voss has a bio at Accelerating Future.
Question: A number of AI researchers, including Josh Hall, Ben Goertzel, Itamar Arel, and yourself, have argued that artificial general intelligence could arrive within a decade. What hard evidence exists to support such predictions?
Answer: The evidence at this point is more indirect than direct. General AI systems will fail to impress until they approach human intelligence. But AI researchers have become more sanguine because we are not hitting any brick walls on performance, and we are generally not encountering serious roadblocks in creating and implementing our AI algorithms.
Question: IBM recently claimed to have simulated a portion of a cat brain. How close is this to a real-time simulation of a whole cat brain?
Answer: Not very close. It demonstrated a statistical correlation of neuronal activity at an arbitrary level of resolution. It was nowhere near a simulation of an actual brain.
Question: So what would a brain simulation need to accomplish in order to be truly noteworthy?
Answer: The real milestone in this development would be a simulated cat brain connected to a robot that could interact with its environment in a meaningful way. Then you would know that you had something nearly equivalent to an actual cat brain. An even more impressive milestone would be if you scanned an animal's brain into a computer and found that the personality and memories of the animal had transferred to the machine. But we are a long way away from that.
Question: Tell us about Adaptive Artificial Intelligence. What is special about its call-center software?
Answer: This is the first application using a modified AGI engine that actually drives a commercial product. This software allows for more natural and flexible conversations. But it also provides a real-world testbed for the AGI engine. The call-center market is a $300 billion industry, and therefore has the potential to generate significant revenue which can be plowed back into AGI research and improving the software. This feedback loop could prove crucial to the development of a true AGI system.
Question: Has there been any evidence to date of computers showing unprogrammed initiative?
Answer: We call unprogrammed initiative bugs! No, none of the AGI projects are far enough along to show high-level abstraction. But we do occasionally experience programs that end up showing more intelligence or insight than we had originally predicted.
Question: Ray Kurzweil has argued that emulating human intelligence requires approximately 10^17 calculations per second. Do you agree with his assessment?
Answer: I believe that Kurzweil's estimate is way too high. His estimate is predicated on reverse-engineering the human brain, which I believe is not a particularly good way to achieve AGI. Reverse-engineering the human brain is a huge endeavor that will likely take decades, and it is quite unnecessary for creating AGI in computers. We didn't need to reverse-engineer birds in order to design airplanes. So I'm convinced that it will require considerably less computing power to emulate human-level intelligence.
Question: So would a computer that runs 10x as fast as a human brain be 10x as intelligent?
Answer: It would have greater mental acuity, but I don't believe that intelligence scales linearly with processing power. It would certainly be extremely useful.
Question: To what extent is AI research being constrained by insufficient funding?
Answer: The field is hugely resource-constrained. I am unaware of any multi-million-dollar AGI projects, let alone any multi-billion-dollar programs. About eight years ago, DARPA had a program that sounded like AGI, but it has largely been abandoned. Unfortunately, I don't see the situation changing in the near future. Getting funding is hard: until you have full-blown AGI, the demos aren't very impressive.
Question: To what extent does AGI benefit from narrow AI research?
Answer: The vast majority of AI funding goes to narrow AI projects, which leads many people to incorrectly conclude that multi-billion-dollar AGI research programs exist. In that sense, narrow AI research is actually a hindrance. However, the tools and algorithms developed for narrow AI are consistently improving, and that is beneficial. Overall, though, narrow AI research does not significantly impact AGI research.
Question: Is Google or any other major corporation, institution, or Government engaged in AGI research?
Answer: Although Google has considerable R&D resources, they are not, to my knowledge, engaged in AGI research. Nor am I aware of any other company or government currently pursuing it. Corporations generally do not fund efforts that require a decade to reach fruition. The number of AGI researchers is tiny, and they all seem to be accounted for.
Question: What signs will emerge indicating that AGI is near?
Answer: The signs will be quite subtle, and will probably be missed by most people. The virtual agents used in our call centers can sometimes feign intelligence, but that intelligence is quite brittle. So when high-level AGI systems do emerge, they will initially just seem like very clever narrow AI applications.
Question: You have predicted that AGI will emerge within the next decade. But how likely is this given the dearth of funding?
Answer: I believe that it is highly likely that AGI will be achieved given sufficient funding.
The problem is funding, and the only strategy I have any confidence in is the one I am taking: we hope that my company, Adaptive Artificial Intelligence, will be commercially successful and that its profits can be used to directly fund AGI development. This approach has the advantage of simultaneously benefiting the call-center product and advancing general-intelligence capabilities. We are growing rapidly and should soon be profitable, so the concept has real merit.
Question: Some have argued that AGI will quickly lead to molecular manufacturing capabilities. Do you agree?
Answer: Yes I do. Molecular manufacturing fundamentally requires a lot of brain power, so once we have AGI systems we can allocate their intellects towards solving these theoretical problems.
Question: Assuming that funding issues get resolved, how do you see AGI impacting the world in the 2020-2030 decade?
Answer: AGI will bring about major changes, and they will happen quickly. Specific developments are extremely hard to predict. My central goal is to develop artificial general intelligence and use that intelligence to rapidly improve the human condition. Many of humanity's most vexing problems can be solved with greater brainpower. The 2020s could witness the emergence of superhuman intelligence, molecular manufacturing, radical life extension, and abundance in many aspects of human life. It won't be a boring decade.