Discontent in the world of AI research.

I came across this article today in MIT's Tech Review: http://www.technologyreview.com/computing/37525/page1/

Some of the big names in AI research have become unhappy with the state of the field. The major contributors to this dissatisfaction are the shift away from basic science toward specialized commercial applications, and the fracturing of the AI community into subfields that each tackle only one specific problem, or tackle a problem in only one specific way. Noam Chomsky and Barbara Partee also spoke at the conference. They said enough to show that they are linguists not in close contact with the research available to the other fields that touch on AI. That's probably the most generous thing I can say about them.

The article is short and fairly superficial; the only reason I bring it up here is that I know we have some people on the forums in AI or allied fields who might want to share their thoughts.

I'll start. :grin:

1) The two main problems described aren't by any means limited to AI research- I know they are every bit as present in the cognitive sciences. But I think maybe they're felt more acutely in AI. Here's why- the computational solutions to applied problems are very different from those that would be investigated by basic research aimed at strong AI. Applied solutions are highly constrained (as in they only do one thing, or one kind of thing), and computationally efficient solutions to a narrow problem look very different from human-like solutions that work across a wide variety of domains. That's my impression, anyway. But then, I've never worked in a lab where the goal was producing strong AI, either.

2) "Really knowing semantics is a prerequisite for anything to be called intelligence" is not something I agree with, and I think there's ample evidence out there that this isn't the case. But there's always a grey area when intelligence isn't defined, I suppose.

Reply #1

You are right to point out that this is not confined to the field of AI but is fairly widespread across all scientific fields. We are living in a time when academic research has begun to compete with privately funded, patented research in ways that were inconceivable twenty years ago. In the field of computer science, most brilliant minds are not entering academia as researchers, but are instead entering the corporate workforce, either by being hired by large corporations or by starting their own companies. Thus, the computer scientists who do choose to enter academia are often not the best and brightest individuals, and their research can sometimes be years behind that being performed in the private sector. These individuals, however, must then struggle under the harsh academic policies placed upon them to produce papers and, ultimately, research dollars. With these kinds of restrictions, academics cannot waste time on trivialities such as understanding, but are forced by the system to produce publishable results that are fundamentally marketable.

Reply #2

And sometimes markedly flawed. Let's not forget the role funding plays in this. Corporate control means fast results and cut corners to save money. Corporations are more profit-minded, and when profit steers the research it is no longer science but business. Business wants to make money: the more the merrier, the faster the better. Very few are cautious enough to look at the long-term implications. That will be their undoing.

Reply #3

Yeah, I agree completely. Not only that, but even in a field where applications are still somewhat limited, the opportunities for, and sources of, funding have changed substantially. We have a grant right now through NIMH, but only because we bent over backwards to tell the story of our research program through its potential benefits for therapy. We likely won't even be able to do that 3 or 4 years from now.

Interestingly enough, the main (and pretty much only, as far as I'm aware) source of funding for basic cognitive modelling is through the Army, Navy, and Air Force and through DARPA.

Honestly, I think that basic science needs better PR. You do polls, and nearly everyone agrees with the proposition: "America should be a leader in the sciences," but, in practice, what they really mean is: "Only if the benefits are immediate."

Edit: Uvah beat me to it.

Reply #4

Sorry about that, LighjtofAbraxas, but I couldn't resist. Whenever government types (i.e., the Army, Navy, or Air Force) get involved, there is but one thought in mind: weapon. Can it be used in a fighter craft? Yes. A combination of the head-up display and special headgear that allows the pilot to track a target with his eye movement. BANG! You're dead. Such systems came about with the F-15 and are incorporated in the F-22 and F-35. Air Force.

Army. Robot warriors, eyes in the sky, UAVs. Battlefield probes no bigger than an insect that can stick to the walls like a fly.

Navy. Same thing. Remote-controlled UAVs like the Predator. Launch them in pairs: one is the hunter, the other the killer. Taking out shore batteries, maybe, or guiding smart missiles right on target. Desert Storm: a smart bomb put right through a second-story window, dead center. Navy planes usually run in pairs. All use the technology that came out of this kind of research.

DARPA: Mind control. Nuff said. 

Reply #5

It's no problem, and I do agree with you. It's essentially the weaponization of math, isn't it? Not that this is new. The calculation of artillery trajectories was a big driving force behind the development of computers early on. Which, as you mentioned, has evolved into guidance systems for smart bombs.

And yeah, all I can think of is the creation of more advanced data-mining algorithms for intelligence gathering and for making Predator drones smarter. I'm not sure I'm sold on mind control quite yet, but I think that, as far as the Army goes, we're going to see exoskeleton technology come online in the next 5-10 years, and eventually, when artificial systems are able to learn about the environment effectively, the human CPU will be replaced altogether.

*Puts speculation hat on* I think that it's interesting to note that the warfare profession is likely to go through its own Industrial Revolution in our lifetime, with machines taking over a significant proportion of front-line duties. I think that we're starting to see it in the air, already. And we'll likely see it in the water and on land after that, too.

I think that the main question, for me, is whether it will ever be cost-effective or technologically feasible to substitute out the remote human controller for artificial intelligence. I guess that's what cognitive scientists are trying to figure out, whether they know it or not.

Edit: This is an Epic Digression, but very interesting. Anyone else have any thoughts?

Reply #6

Eventually, I think we will see humans removed from the front lines. With government and the private sector offering more money than the scientific/research sector, technology will develop faster in weapons, especially in artificial intelligence.

I read an article saying that researchers and engineers have already developed computers that learn on their own by mimicking the human thought process. Not long ago, a computer beat a human on Jeopardy! using this technology.

I remember (when I was much younger, lol) watching Terminator and hearing about neural networks. Fairly futuristic at the time. Now? Not so futuristic...

And as for land- or sea-based UV.... well.... look up the (ARV)UGV.....

Reply #7

This evolution of the field is unavoidable. Academics are simply not well suited to attract capital. The best research will happen behind closed company doors, because they can pay the best and they can make the research pay off for them (universities can do neither).

The question is, does the added capital from patent-hungry companies outweigh the benefits of having an open field? I say yes. Companies are responsible for the vast majority of technological advancements in this century, and the majority of the capital as well.

A practical application of a field is not something negative. Does it slow the advancement of the theory of the field? Maybe. Or maybe the actual applications of what they are doing will attract new minds and new capital that never would have considered the field otherwise (for example, the annual StarCraft AI competitions). A good scientist knows that EVERYTHING ADDS TOGETHER in a field, and ANY science is driven by an interest (whether personal or profitable).

Ultimately it comes down to power, and the control of power. The "scholars" are grumbling because they want more. The companies want more. They'll struggle and make pointed arguments like the article. But... why should I care for power-hungry people? Which greed is more right?


Reply #8

Greed is greed regardless of who's waving the flag. There is corporate greed and scientific greed. One is money, which equates to power; the other is power, which equates to money. The two go hand in hand: when you've got one, you've got the other. Science for science's sake is only for those who have yet to get bitten. Every scientific endeavor has its PR person who goes out there to scrape up funding for the project. The problem is that those with the money want control, because in their eyes it's their money, so they feel they should call the shots, not those doing the actual work. But... without that funding, research grinds to a halt. No altruism here.

*forums go boom* twice so far.

Reply #9

Quoting Heavenfall, reply 7
A good scientist knows that EVERYTHING ADDS TOGETHER in a field, and ANY science is driven by an interest (whether personal or profitable).

The problem arising from this point is that privatized research does not add together for the betterment of all people. Thus, many areas of research are constantly being reinvented due to corporate and governmental secrecy, while others are being stifled completely due to creative application of patent law. As you elegantly pointed out, the truth is that such knowledge is power, and academics, much like private researchers, simply wish for a piece of this power. While academics would have you believe that they desire to expand the overall knowledge of humanity, their practices are shockingly similar to those of private researchers.

Reply #10

Wow, a bunch of cynics here, huh? I guess I just don't see what's wrong with wanting to make money for your employees and stakeholders, or with advancing your career if it's going to put you in a better position to contribute to the corpus of human knowledge.

Quoting Heavenfall, reply 7
This evolution of the field is unavoidable. Academics are simply not well suited to attract capital. The best research will happen behind closed company doors, because they can pay the best and they can make the research pay off for them (universities can do neither).

True, but I would disagree with one thing here. I think the pay gap between public- and private-sector research has been much exaggerated. A full professor at an R1 institution can easily make over $100,000 a year, and as much as 15-20% more if they carry a full load through the summer. Add to that fairly generous matching of 401(k) contributions, and you're making a pretty fair living. Granted, there aren't a huge number of spots at the top level, but if you're bright and you work hard, it's definitely doable. I blame political rhetoric for propagating the myth that all the best and brightest are naturally in the private sector because the pay is so much better. It's true to some degree, but not nearly as much as it's been portrayed.

Academics are also free to pursue patents of their own, and often do. I know of a couple labs doing work with pharmacological adjuncts to behavioral treatment of PTSD. A couple drugs have patents pending. One for a drug that enhances 'extinction,' as it's called in the field, and another that disrupts memory reconsolidation.

Quoting Heavenfall, reply 7
A practical application of a field is not something negative. Does it slow the advancement of the theory of the field? Maybe. Or maybe the actual applications of what they are doing will attract new minds and new capital that never would have considered the field otherwise (for example, the annual StarCraft AI competitions). A good scientist knows that EVERYTHING ADDS TOGETHER in a field, and ANY science is driven by an interest (whether personal or profitable).

Again, I agree with most of this. But the main difference is that one of the things that makes science in the public sector work is the openness of information (as kenata mentioned above). The end result of any research project is publication for all to see, replicate, and extend. Corporate science is understandably cloistered. And why wouldn't it be? Why would I want some other corporation benefiting from the research I've paid for?

Also, I agree that the degree to which private-sector research benefits the field depends on the field. I think of genetics as having parallel goals, because the basic science leads to immediate applications. But in other fields, the goals run closer to 90 degrees to one another. The problems that need to be solved by applied 'AI' research and by basic AI research tend to be very different.

Reply #11

I guess I'm one of those guilty people using AI for corporate greed. I'm doing research in neural nets and I'm trying to train them for a very specific application, and I'm laser-focused on solving about 3 specific problems with them. I seriously doubt that the academic community will suffer because of the work I'm doing, because I'll probably be publishing a couple of papers and maybe filing for a patent on this. In fact, 2 of the problems are so compute-intensive that I think I'm going to have to code the GPU to run them. Prior research papers are showing a 10x to 215x speedup from programming neural nets on a GPU. I'm hoping to build on that research and solve a really tough problem that previously would not have been possible. Programming a GPU to do things other than graphics is known to work, but it is a very new thing.

Reply #12

Quoting tetleytea, reply 11
I'm doing research in neural nets and I'm trying to train them for a very specific application, and I'm laser-focused on solving about 3 specific problems with them.

OHHHH! Any chance that I can get you to elaborate? Umm, I understand that there might be NDA issues involved, in which case, I won't insist. But now you have my interest piqued.

Also, I didn't mean to imply that applied research 'hurts' basic research in any way less benign than taking a larger share of brains on the problem than they had in the past.

Reply #13

tetleytea, maybe you could help out with the AI of fheroes2! ;-)  That would be a welcome application of AI, although you might see it as a bit limited.  But if the possibilities of the game are expanded, there would be more use for advanced AI. But it would be great to have AI that is tough without needing to get resource bonuses! =)

Anyway, just a thought.  I suppose you are restricted in what you can use it for. =)

Best regards,
Steven.

Reply #14

Quoting StevenAus, reply 13
But if the possibilities of the game are expanded, there would be more use for advanced AI. But it would be great to have AI that is tough without needing to get resource bonuses!

There has been some research into gaming within the AI field; however, the type of game AI you are discussing is a fairly distinct beast. In principle, one could build an AI for Elemental that produced a near-optimal turn once per day, and this type of AI might be desired by some players. Yet, for the overwhelming majority of users, this would be a completely absurd proposal for a workable AI. For practical gaming applications, it is far more important that an AI be quick in its decisions than ultimately correct in them. Thus, AI developers for video games tend to sacrifice move optimality for a reduction in computational complexity, both in memory usage and in time, by using scripted reactions instead of dynamic ones.
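To make that trade-off concrete, here's a minimal sketch of the two styles. Everything in it (the game states, the moves, the scoring function) is invented for illustration and isn't from any particular game:

```python
# Scripted AI: a fixed table mapping coarse game states to canned reactions.
# All states and actions here are hypothetical.
SCRIPTED_REACTIONS = {
    ("enemy_adjacent", "low_health"): "retreat",
    ("enemy_adjacent", "high_health"): "attack",
    ("no_enemy", "low_health"): "heal",
    ("no_enemy", "high_health"): "explore",
}

def scripted_ai(state):
    # O(1) lookup: instant, but never smarter than its script.
    return SCRIPTED_REACTIONS[state]

def deliberative_ai(state, moves, evaluate):
    # Exhaustive evaluation: picks the best-scoring move, but the cost
    # grows with the size of the move space (hence "one turn per day").
    return max(moves, key=lambda move: evaluate(state, move))

# Usage with a toy evaluation function that happens to favor attacking.
score = {"attack": 3, "retreat": 1, "heal": 2}
best = deliberative_ai(("enemy_adjacent", "high_health"),
                       ["attack", "retreat", "heal"],
                       lambda s, m: score[m])
```

In a real game the scripted table is far larger and hand-tuned, and the deliberative search is pruned rather than exhaustive, but the time-versus-optimality tension is the same.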

Reply #15

Well, I was more referring to fheroes2, but I guess that would have the same problem with turns taking too long (though only heroes can move in that game).  But I see what you mean regarding Elemental.

Best regards,
Steven.

Reply #16

I would love to work on HOMM2, period, but I have to admit, I come home brain-fried every day and I don't feel like coding yet another thing.

I guess I could elaborate on one of the problems, though the other two (which I won't discuss) are actually by far the more interesting and promising. It's load-balancing an internet server farm or cloud. You have incoming requests from all over for CPU, memory, disk load, etc. All very random. When a job first starts, you have no idea how long it's going to take, how much it's going to load down the disk in the end, or how large its memory footprint is going to be at its peak. You have to guess. You have servers on the cloud (say, 12-core Opteron boxes with 64 GB of RAM, and maybe a shared file server). If you guess wrong and schedule too many jobs in parallel on the same server, you get some serious disk thrashing. We're attempting some guesswork now to try to avoid that situation, but what I'm doing is feeding all the historical behavior of previous jobs into a neural net and, based on the nature of those previous jobs (which have a known set of arguments, platform run on, memory footprint, disk usage, run time, etc.), finding the correlation with incoming requests and guessing what their resource consumption is going to be. Very often no two jobs are exactly alike. That's what neural nets are good at: making a guess about something they've never seen before based on what they've learned in the past.
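As a rough illustration of the idea (not tetleytea's actual system; the job features, targets, and network size here are all invented), a tiny one-hidden-layer net can be trained on "historical jobs" and then asked to guess the footprint of a job it has never seen:

```python
import math
import random

random.seed(0)

# Toy "historical jobs": features are (normalized argument count, platform
# flag, previous run time); the target is peak memory as a fraction of the
# box. Every number here is invented for illustration.
HISTORY = [
    ((0.1, 0.0, 0.2), 0.15),
    ((0.8, 1.0, 0.9), 0.85),
    ((0.4, 0.0, 0.5), 0.40),
    ((0.9, 1.0, 0.7), 0.80),
    ((0.2, 1.0, 0.3), 0.30),
    ((0.6, 0.0, 0.6), 0.55),
]

H = 4  # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def predict(x):
    """Forward pass: tanh hidden layer, linear output."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    return hidden, sum(w * h for w, h in zip(w2, hidden)) + b2

# Plain stochastic gradient descent on squared error (backpropagation).
LR = 0.1
for _ in range(2000):
    for x, target in HISTORY:
        hidden, y = predict(x)
        err = y - target
        for j in range(H):
            grad_hidden = err * w2[j] * (1.0 - hidden[j] ** 2)
            w2[j] -= LR * err * hidden[j]
            for i in range(3):
                w1[j][i] -= LR * grad_hidden * x[i]
            b1[j] -= LR * grad_hidden
        b2 -= LR * err

# Guess the resource consumption of a job the net has never seen.
_, guess = predict((0.7, 1.0, 0.8))
```

The real version would obviously use far more features and far more history (and a GPU, as described above); the point is only that the net interpolates a guess for a novel job from the jobs it has already seen.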

Anyway, that's the easy one: there you have a known set of inputs. In the other two, the inputs are dynamic at run time. Plus, if you know anything about gradients in neural nets, I have a 1600-dimensional space I'm having to cut a hyperplane through. Usually NNs are in the realm of 8 dimensions at most. That's why I need a GPU.

Reply #17

Wow.  Yes, I can see why that fries your brain. =)  And that's one thing about corporate research: I would think it would frequently be high-stress. Not saying that academic research is less stressful, just that when you're always searching for new ways to make money, I can imagine it would tend to be very stressful.

Best of luck with it! |-)   Not much happening in fheroes2 at the moment, the maintainer is busy in RL.  As are you. =) ;-)

Best regards,
Steven.