20220516

I’ve been thinking more about what the future of AI might look like, prompted by this recent result from DeepMind. In brief, it is a single neural network that can act through a robot, play digital games, and caption images. It’s the first example I have seen of an agent performing multimodal tasks with the same architecture (and learned weights!) across all tasks. In a way it feels like artificial general intelligence is not that far off. If there is an exponential growth curve to the ability of AI, then things could change very rapidly in the next few years. Defining and measuring this growth curve is difficult, though. As far as I am aware there is no well-defined equivalent of Moore’s Law or Metcalfe’s Law for AI, so it is a bit hard to draw any comparisons. This Kurzweil chart may simply suggest that it’s the underlying compute power that matters, in which case Moore’s Law is the exponential for AI as well. The success of recent very large networks also suggests that maybe scaling network size is all that really matters.
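
Just to make the "exponential" intuition concrete, here is a toy sketch of how fast a doubling process compounds. The doubling period and the capability metric are made-up assumptions for illustration, not anything measured:

    # Toy illustration: a Moore's-Law-style exponential.
    # Assumes a hypothetical "capability" metric that doubles every
    # `doubling_period` years; both numbers are invented.
    def capability(years, doubling_period=2.0):
        """Relative capability after `years`, starting from 1.0."""
        return 2.0 ** (years / doubling_period)

    for years in (2, 4, 8, 16):
        print(f"{years:2d} years -> {capability(years):6.0f}x")
    # 16 years at a 2-year doubling is already 256x, which is why
    # "the next few years" matter so much on an exponential.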

I have a lot of thoughts related to this. I think a lot of people are brushing this under the rug and shifting the goalposts to just beyond AI’s current capabilities. Chess was hard and now it’s easy; Go was hard but now it’s easy; Atari was hard but now it’s easy; and so on. The refrain is that these are just games, and doing anything real that humans can do is hard. I wonder if we lack the humility as a species to recognize that perhaps our intelligence isn’t that great, and that what we’re doing is just playing various games ourselves that we have convinced ourselves are nontrivial. I suppose we will find out soon enough if the exponential is to be believed.

Another thought is whether or not there are natural limits to the capabilities of extreme intelligence that we just aren’t aware of. We assume a superintelligent AI would be absurdly powerful, but maybe it can’t do much more than we already can because of the limitations of physical reality. It’s hard to state what these limits might be, but trying to ground the supposedly mystical abilities of an AI in concrete reality is a useful exercise. Consider, for example, an AI that can see through my phone’s camera. If I show it a burrito I ordered, it will not be able to know how it was made without having observed the preparation or knowing what I ordered (and even then, the restaurant could have mistakenly given me something different). This seems like something trivial that a “superintelligent” system should be able to figure out, but without more information it cannot. Maybe with sufficient sensors it could do on-the-fly X-ray-type imaging, or maybe it could communicate with other AI systems that are aware of how this burrito was made. Maybe it’s part of a global computer that can trace the entire supply chain of production that went into making this burrito. Are these feasible things that an AI can figure out how to do? I don’t know.

I recently learned that Norbert Wiener advocated against AI systems, using Mickey Mouse and the brooms in Fantasia as an analogy. Mickey has a simple task that he wishes to automate, but it quickly gets out of control and an even more powerful authority has to step in. However, we don’t have any authority to step in and help us if our AI systems run astray. It’s a chilling thought, because it seems extremely plausible. The more prevalent powerful AI systems become, the more likely such a mishap is to happen. I guess we’ll just have to wait and see.

YouTube recommended this video to me about printed circuits. What was most interesting right away was seeing how PCBs were originally designed by hand and then quickly how that was replaced by computerized design methods. It got me thinking about how we bootstrap tools to make more powerful versions of themselves. With computers this seems to be a never-ending process that results in more and more powerful machines: we used our most rudimentary computers to help us create more effective computers, and then we made better software to help us make better computers still. We’re now at the point where we use AI to design computers, which we in turn use to make better AI. The more sophisticated the computers, the more sophisticated the tools required to build them. I think there’s some nice general idea or theory of tool development here; in any case it was interesting to become aware of the pattern.


Daily Listening

I’ve settled on really liking these two tracks from this future funk album (one, two).

Daily Reading

Read a bunch of Jim Morris’s autobiography while waiting at graduation events this weekend. It is very interesting to read a more firsthand account of events at Xerox PARC, and one told from the tools-building side of things rather than the user side that Alan Kay focused on.