Discontent in the world of AI research.
I came across this article today in MIT's Tech Review: http://www.technologyreview.com/computing/37525/page1/
Some of the big names in AI research have become unhappy with the state of the field. The shift away from basic science toward specialized commercial applications, and the fracturing of the AI community into subfields that each tackle only one specific problem (or tackle a problem in only one specific way), are major contributors to this dissatisfaction. Noam Chomsky and Barbara Partee also spoke at the conference. They said enough to show that they are linguists not in close contact with the research coming out of the other fields that border on AI, which is probably the most generous thing I can say about them.
The article is short and fairly superficial; the only reason I bring it up here is that I know we have some people on the forums in AI or allied fields who might want to share their thoughts.
I'll start.
1) The two main problems described aren't by any means limited to AI research; I know they are every bit as present in the cognitive sciences. But I think they may be felt more acutely in AI. Here's why: the computational solutions to applied problems are very different from those that basic research aimed at strong AI would investigate. Applied solutions are highly constrained (they do only one thing, or one kind of thing), and computationally efficient solutions to such narrow problems look very different from human-like solutions that work across a wide variety of domains. That's my impression, anyway. But then, I've never worked in a lab whose goal was producing strong AI, either.
2) "Really knowing semantics is a prerequisite for anything to be called intelligence" is not something that I agree with, and I think that there's ample evidence out there that this isn't the case. But there's always grey area when intelligence isn't defined, I suppose.