Next-gen AI and Next-gen BS

 

ForumSafari:

VoidOfOne:
In any case, very good story. I still feel skeptical when someone says X will make games better in one way, shape, or form. My X being the cloud, or cloud servers. I still don't fully understand the matter, and an explanation from anyone is welcome.

The Cloud was the subject of my dissertation, any specific questions you're interested in?

Sure, and thanks. Mainly, there's been a lot of talk about how access to the cloud could enhance gameplay. I remember it being mentioned in connection with the upcoming game Destiny by Bungie. My question is: how do they think access to the cloud could accomplish this?

Bad Jim:
There's another way to categorise AI: AI that uses a few if/then statements to decide what to do, and AI that actually tries to calculate the best move.

The first kind is what nearly all current games use. It's CPU-efficient, but it is limited to the tricks the programmer teaches it, and there are always exploitable holes in the logic.

The second kind is how chess AI works. It still makes mistakes, but it is much less exploitable and can potentially be much smarter than the programmer who designed it. The catch, of course, is that its effectiveness depends on the CPU cycles you can feed it.

Galactic Civilizations 2 does actually attempt to analyse the game like this, and it is noted for its smart AI. And yes, that AI still has many flaws, but it's also considered to be miles ahead of other 4X games.
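
To make the distinction concrete, here's a minimal sketch of both kinds (my own toy example, not from any shipped game), using the matchstick game Nim: 13 sticks, take 1 to 3 per turn, whoever takes the last stick wins.

```python
def rule_based_move(sticks):
    """Kind 1: a few if/then rules. Cheap, but brittle; these rules
    have no good answer once sticks is already a multiple of 4."""
    if sticks % 4 == 0:
        return 1              # no winning rule exists here; just grab one
    return sticks % 4         # leave the opponent a multiple of 4

def minimax_move(sticks):
    """Kind 2: actually search the game tree for the best move.
    Potentially smarter than its programmer, but the cost grows
    with the size of the tree, i.e. with the CPU cycles you feed it."""
    def value(s, maximizing):
        if s == 0:
            # whoever just moved took the last stick and won
            return -1 if maximizing else 1
        scores = [value(s - take, not maximizing)
                  for take in (1, 2, 3) if take <= s]
        return max(scores) if maximizing else min(scores)
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: value(sticks - take, False))

print(rule_based_move(13), minimax_move(13))  # both leave a multiple of 4
```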

I was going to bring up Gal Civ 2's AI, but once again, I'm beaten to the punch.

Instead, I'll expand on the design philosophy presented: different AIs do different things, but all AI behavior can be described in two ways: Proactive (Strategy AI) and Reactive (Combat AI). Which makes sense, since that's how gameplay is designed (barring games of chance).

Aimbots are purely reactive. They don't really pre-aim or funnel, or even flank, because they don't have to.
They're going to kill their target the moment the game mechanics permit it. (The Loque bot from UT99 was infamous for this; if it gets a sniper rifle, it will win every encounter the second it can see you, without fail.)

Another reactive AI is the "auto-dodge" AI.
I see these a lot in fighting games, but they also occasionally pop up in shooters and vehicle games.
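
For illustration, here's a minimal sketch of how purely reactive logic looks in code (the structure and field names are hypothetical, my own); note there's no planning anywhere, just if/then on the current state:

```python
def reactive_update(bot, target):
    """Aimbot-style logic: if the target is visible, fire immediately.
    No pre-aiming, no flanking, no prediction; the whole 'decision'
    is one if/then on the current game state."""
    if bot["can_see_target"]:
        bot["aim"] = target["position"]   # perfect snap-aim
        return "FIRE"
    return "WANDER"

def auto_dodge_update(defender, incoming_attack):
    """Auto-dodge logic, as in fighting-game AI: react on the exact
    frame an attack starts, which human reflexes can't match."""
    return "DODGE" if incoming_attack is not None else "IDLE"

bot = {"can_see_target": True, "aim": None}
print(reactive_update(bot, {"position": (10, 4)}))  # FIRE
print(auto_dodge_update({}, "high_kick"))           # DODGE
```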

VoidOfOne:

ForumSafari:

VoidOfOne:
In any case, very good story. I still feel skeptical when someone says X will make games better in one way, shape, or form. My X being the cloud, or cloud servers. I still don't fully understand the matter, and an explanation from anyone is welcome.

The Cloud was the subject of my dissertation, any specific questions you're interested in?

Sure, and thanks. Mainly, there's been a lot of talk about how access to the cloud could enhance gameplay. I remember it being mentioned in connection with the upcoming game Destiny by Bungie. My question is: how do they think access to the cloud could accomplish this?

The claim of cloud processing, as an idea, is simple in theory:
Access to remote processing could assist games by offloading some of the processing load from the local system.

The problem with this claim is that video game processing is overwhelmingly TIMELY.
That is, a lot of numbers need to be crunched in a relatively short period of time.

Games have many components, some requiring far more processing cycles than others, each with its own similar-but-different processing schedule. But compared to graphics/rendering, those components are trivial; hence why most optimization effort goes into the visual side.

On top of that, Cloud Processing requires a network connection (Internet, in this case), and that means the strong potential for lag. Since games are so timely, even a TINY amount of lag will have a very noticeable impact on the process.

There are a few tricks to try and mask this issue and keep the game optimized, but the skinny of it is that unless the end user has an incredible connection to the Cloud (exceptionally low latency; we're talking under 20ms), the Cloud isn't going to help much, if at all.

AI processing, as described in the article, isn't really that resource-intensive. Sure, it's a complicated decision-making process relative to us as people, but computers only care about crunching numbers, not concepts.

The math requirement of rendering involves a lot of "big" numbers; far more than any AI decisions, and it balloons for even a minor increase in graphical fidelity.
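
A quick back-of-envelope comparison (all numbers here are my own illustrative assumptions, not measurements):

```python
# Compare per-second work for shading pixels vs. making AI decisions.
pixels = 1920 * 1080            # one 1080p frame
shader_ops_per_pixel = 500      # assumed cost of a modest modern shader
fps = 60
render_ops = pixels * shader_ops_per_pixel * fps

npcs = 100                      # assumed NPC count
ops_per_decision = 10_000       # assumed cost of one scripted AI decision
decisions_per_second = 10       # each NPC re-decides 10x per second
ai_ops = npcs * ops_per_decision * decisions_per_second

print(f"rendering: {render_ops:.2e} ops/s")   # ~6.2e10
print(f"AI:        {ai_ops:.2e} ops/s")       # ~1.0e7
print(f"ratio:     {render_ops / ai_ops:,.0f}x")  # thousands of times more
```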

Incidentally, AI programming, though computationally trivial, is also timely, since the computer's decisions impact the game state in real time as well. Meaning it too would be a poor choice for Cloud Computing, though not nearly as bad as rendering.

So not only are there few functions that Cloud Computing would significantly benefit, but the biggest one that it could help is too timely to reliably work over an internet connection.

Another way to look at it: Cloud Computing is most beneficial for solving big problems slowly, like scientific equations with large degrees of accuracy (huge Taylor Series, if you've dabbled in Calculus); equations that are easily broken into smaller chunks and given relatively long amounts of time to crunch.
Which is the diametric opposite of the design virtually all video games employ, by necessity.
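
As a minimal sketch of that kind of workload (my own example; the chunk boundaries are arbitrary), here's a Taylor series for e^x split into pieces that could each go to a different machine, because order of completion doesn't matter and nobody needs the answer within 16 ms:

```python
from math import factorial

def partial_sum(x, first_term, last_term):
    """One worker's chunk: terms [first_term, last_term) of e^x."""
    return sum(x**n / factorial(n) for n in range(first_term, last_term))

x = 2.0
chunks = [(0, 25), (25, 50), (50, 75), (75, 100)]  # 4 hypothetical workers
result = sum(partial_sum(x, a, b) for a, b in chunks)
print(result)  # ~7.389056..., i.e. e^2
```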

VoidOfOne:
I remember that there was mention of this with the upcoming game Destiny by Bungie. My question is how do they think that access to the cloud could accomplish this?

'The Cloud' is basically highly available computing as a service. I'm not sure how Destiny was going to use it, but it's used by other games to outsource computation to more powerful computers and to handle saved games. The saved-game storage is easy enough; it means you can pick your game up from anywhere. The more interesting idea is the computation in the cloud.

Basically, a console isn't a terribly powerful platform all told, and modern games have a lot of calculations in them. By moving physics calculations, AI behaviour or other stuff like that out to a more powerful series of computers, they can carry out the computation a lot faster than you could on your Xbox and just push the results back to the console, allowing the console to render more complex visuals or do other stuff with its resources.

The other thing the Cloud could be used for is less a benefit of the Cloud and more a benefit of client/server architecture generally. When you run a multiplayer game, the content is generally hosted on someone's console... someone who probably has the same slightly shitty Internet connection and hardware as everyone else, but who is now playing their game AND hosting the map for everyone else. By locating the server for the maps in the Cloud, automatically moving the server instance to the datacentre with the best average connectivity to the players, and moving a lot of the calculations off of the consoles and into the server, they can ensure a better average connection (they've got enterprise-grade connections) and a faster game, seeing as they're calculating things like bullet drop for you. You're therefore computing less on local machines and sending less back up your home connection, meaning your connection is probably more stable and your console can use its resources for better things.
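
As a rough sketch of the bullet-drop idea (entirely hypothetical, not any real game's netcode): the client just reports a shot, and the server, with its beefier hardware, integrates the arc:

```python
G = 9.81  # m/s^2

def server_resolve_shot(muzzle_pos, velocity, dt=0.001, max_t=5.0):
    """Step a projectile under gravity and report where it ends up.
    A dedicated server can afford a finer timestep per bullet than
    a console juggling rendering would."""
    (x, y, z), (vx, vy, vz) = muzzle_pos, velocity
    t = 0.0
    while y > 0 and t < max_t:
        x += vx * dt
        y += (vy - G * t) * dt   # vertical speed decays under gravity
        z += vz * dt
        t += dt
    return (x, y, z)

# Client message: "I fired from (0, 1.5, 0) at 850 m/s, slightly upward."
print(server_resolve_shot((0.0, 1.5, 0.0), (850.0, 2.0, 0.0)))
```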

ForumSafari:

-Snip-

Thanks for the answer! And also thanks for answering in a way I understand; I appreciate it.

I guess we'll soon see if the Cloud is as much of a boon to gaming as advertised. Would be awesome if so.

VoidOfOne:

ForumSafari:

-Snip-

Thanks for the answer! And also thanks for answering in a way I understand; I appreciate it.

I guess we'll soon see if the Cloud is as much of a boon to gaming as advertised. Would be awesome if so.

Not to rain on your parade, but it really won't be.
Apart from centralizing some multiplayer processes (which is a staple of dedicated/official servers already), it won't really improve much of anything.

Atmos Duality:
Not to rain on your parade, but it really won't be.
Apart from centralizing some multiplayer processes (which is a staple of dedicated/official servers already), it won't really improve much of anything.

Oh, I don't know; it really varies on a game-by-game basis. As you say, it all depends on how time-sensitive the calculation is and how complex it is in the first place, but there's room for use in slower games and in the non-game services that most consoles run in the background anyway.

Deploying dedicated servers on a cloud platform, particularly when they can piggyback on Azure's infrastructure, is a good way to load balance.

If I remember correctly, didn't F.E.A.R. have really stupid AI? Like, so stupid that its programming was literally "RUN AT PLAYER. SHOOT." A developer came back and said the only reason the enemies flanked the player and seemed so smart was that the level design was aimed at forcing the AI into those routes and getting players to react in ways that would help this.

ForumSafari:

Oh, I don't know; it really varies on a game-by-game basis. As you say, it all depends on how time-sensitive the calculation is and how complex it is in the first place, but there's room for use in slower games and in the non-game services that most consoles run in the background anyway.

Deploying dedicated servers on a cloud platform, particularly when they can piggyback on Azure's infrastructure, is a good way to load balance.

I agree with that in principle, but in practice, mainstream console games overwhelmingly trend towards "real-time" rendering.

More accurately, those kinds of video games are calculated on a per-frame basis (rendering, by definition, is per-frame unless the image is totally static), meaning that to find the "timeliness" of a game (the window it has to crunch everything), you take your frames-per-second and invert it to get seconds per frame.

So, if your game runs at, say, 60FPS, all crunching must be done within 1/60-second intervals (0.0167 seconds, or 16.7 milliseconds).
Meaning your latency to the Cloud MUST be less than 16.7ms to provide any real benefit.

(For 25FPS, the time is a much more generous 40ms, but there's a big push for higher frame rates in gaming.)
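
The arithmetic, as a quick sanity check (the 40ms round-trip figure is my assumption about a typical home connection):

```python
def frame_budget_ms(fps):
    """The frame budget is just the inverse of the frame rate."""
    return 1000.0 / fps

for fps in (25, 30, 60, 120):
    print(f"{fps:>3} FPS -> {frame_budget_ms(fps):5.1f} ms per frame")

rtt_ms = 40.0  # assumed home round-trip time to the cloud
print("fits in a 60 FPS frame:", rtt_ms < frame_budget_ms(60))  # False
```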

Even in an optimal scenario where you can free up more processing for video by "unloading" non-video processes to the Cloud, it still won't amount to much of an increase in performance OR fidelity,
because increasing graphical fidelity now requires INCREDIBLE increases in processing power to be noticeable.

Freeing up 20-40% of local processing resources is nowhere near enough.
Hell, TRIPLING the processing resources available wouldn't be enough today.

[image: polygon-count comparison of 600 / 6,000 / 60,000 poly models]

So, the only benefits you're left with are those akin to dedicated servers.

Which is a nice perk, but a potentially dangerous one too, since the same Cloud-process-loading feature can very easily be modified to make a game dependent on the Cloud, rendering it effectively "Always Online".

Atmos Duality:
So, if your game runs at, say, 60FPS, all crunching must be done within 1/60-second intervals (0.0167 seconds, or 16.7 milliseconds).
Meaning your latency to the Cloud MUST be less than 16.7ms to provide any real benefit.

Only if the client-side AI is literally brain-dead without the cloud. If it can follow basic instructions, such as "go here", "attack that guy", etc., then you can have a high-level AI in the cloud that updates the client AI every half second or so. On that sort of timescale, latency is not a big issue.
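
A minimal sketch of that two-tier split (my own structure and numbers; in a real game the strategist call would be an async network request to the cloud):

```python
import math

def cloud_strategist(npc, player):
    """Slow, smart layer: picks a high-level order every ~0.5 s."""
    dist = math.dist(npc["pos"], player["pos"])
    return {"order": "attack" if dist < 30 else "move_to",
            "target": player["pos"]}

def step_toward(pos, target, speed):
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (pos[0] + speed * dx / d, pos[1] + speed * dy / d)

def local_executor(npc, plan):
    """Fast, dumb layer: follows the last order every single frame,
    even if the next plan arrives late."""
    if plan["order"] == "move_to":
        npc["pos"] = step_toward(npc["pos"], plan["target"], speed=0.1)
    elif plan["order"] == "attack":
        npc["last_action"] = "shoot_at %s" % (plan["target"],)

npc, player = {"pos": (0.0, 0.0)}, {"pos": (50.0, 0.0)}
plan = cloud_strategist(npc, player)   # refreshed at ~2 Hz
for frame in range(60):                # runs at 60 Hz regardless
    local_executor(npc, plan)
print(npc["pos"])  # crept toward the player on "stale" orders
```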

Atmos Duality:
Which is a nice perk, but a potentially dangerous one too, since the same Cloud-process-loading feature can very easily be modified to make a game dependent on the Cloud, rendering it effectively "Always Online".

There is an upside to this. If a company like EA realises it can protect its AI code by not actually giving it to their customers, they might, just might, throw a decent sack of cash at advancing the field of game AI so they can justify it. Whether the AI they develop will actually need cloud processing is largely down to random chance, but they would hopefully realise the importance of the AI being good.

Kahani:
Somewhat ironic that your post was about people not actually realising what the AI does, since Civilisation (the latest one at least; I can't remember exactly how previous versions worked) is one of the ones that actually doesn't cheat.

Actually, I got that example from a lecture by Soren Johnson.

Bad Jim:

Only if the client-side AI is literally brain-dead without the cloud. If it can follow basic instructions, such as "go here", "attack that guy", etc., then you can have a high-level AI in the cloud that updates the client AI every half second or so. On that sort of timescale, latency is not a big issue.

You can, but what's the bloody point when AI barely consumes any local processing to begin with?
You aren't really saving anything by hosting it in the cloud, and what little you do save would never amount to anything significant.

If anything, hosting it on the cloud just creates another potential point of failure.

There is an upside to this. If a company like EA realises it can protect its AI code by not actually giving it to their customers, they might, just might, throw a decent sack of cash at advancing the field of game AI so they can justify it. Whether the AI they develop will actually need cloud processing is largely down to random chance, but they would hopefully realise the importance of the AI being good.

That's quite a stretch for an upside.

For one thing, broadband internet has all but eliminated AI as a development priority, because multiplayer-only is the future companies like EA are moving toward. When you have a large population of human players, there's just no point in making AI-controlled elements.

That, and I have never once heard "protecting the AI" cited as a motivator for... well, anything.
It's just a bizarre assertion to make.

dolgion:
A good topic to discuss. Sure, combat AI routines are peanuts compared to the requirements of graphics processing, but when I think of "Super AI", I'm thinking of the limitless potential the field has left untapped.

Think of an open-world game where every single NPC has dynamic interactions with others AND the player, where their actions are determined on the fly by ever-changing parameters. You'd have to write a human simulation, much in the vein of The Sims. When you increase the complexity and reactivity, the computational needs would surely skyrocket, no?

An example:

Bob is a farmer somewhere in Tamriel. He has 2 daughters in their teens and a troublesome son who's running with the wrong crowd. He worries about whom he should marry his daughters to and how he should straighten out his son. After all, last month he had to bail him out from the local guard after his son tried to steal a noble's purse as some sort of rite of passage at the thieves guild. Thing is, the bail cost him his savings and then some. He's now in debt and all he can hope is a really good harvesting season. One of his daughters is in love with a local banker's assistant, but Bob isn't sure if the boy has the smarts to make a career and provide for his daughter should they marry. With so many worries, and the death of his wife some 6 years ago, he's been starting to have a drink before bed just so he can fall asleep better.

This is just one character among thousands in this imaginary game. You'd need to simulate a working economy, personality traits, relationships between characters, and more. I wonder how impossible this kind of thing would be in terms of processing power, let alone designing the algorithms for it.
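
A minimal sketch of what one tick of "Bob" might look like as a utility-style agent, in the vein of The Sims (all the traits, weights, and action names are invented):

```python
def choose_action(bob):
    """Score each possible action against Bob's current needs and
    pick the most pressing one -- classic utility AI."""
    scores = {
        "work_fields":  bob["debt"] * 2.0,
        "lecture_son":  bob["son_trouble"] * 1.5,
        "drink":        bob["grief"] + bob["stress"],
        "seek_suitor":  bob["daughter_worry"],
    }
    return max(scores, key=scores.get)

bob = {"debt": 0.8, "son_trouble": 0.6, "grief": 0.7, "stress": 0.5,
       "daughter_worry": 0.4}

for day in range(3):
    action = choose_action(bob)
    print(f"day {day}: {action}")
    bob["stress"] += 0.1            # parameters keep drifting...
    bob["debt"] = max(0.0, bob["debt"] - (action == "work_fields") * 0.3)
```

Multiply that by thousands of NPCs, plus the economy and the relationship graphs between them, and the costs start to add up.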

And even then you would have the challenge of communicating it to the player. It's not hard to imagine that Bob already has a whole hidden life in today's Skyrim, but the player would never notice, because the developers couldn't think of a good way for Bob to communicate it. The game is voice-acted, so by that standard they would have to invent a voice synthesizer and a way to translate Bob's plans into words (a massive feat on its own) just to get the player to notice all the intricacies.

Shamus, you must have never played ArmA.

The only reason AI doesn't usually require lots of system resources is because gaming AI is so incredibly limited and dumbed-down. The minute you try to do something ambitious with 'combat AI' (like in ArmA, where the poor little guys have to do EVERYTHING the player can do in a massive persistent environment), system resource limitations begin to loom large. Really large.

ArmA is a PC game, so the point about new console generations stands, but it's still one game that has already run up against a wall, as the use of headless clients (MP communities buying a separate copy of the game to run the AI independently of the server) shows.

ArmA could still be a game with AI that resembles F.E.A.R.'s or CoD's (it would probably be a lot more reliable, and it would also destroy the dynamic, sandbox nature of the game), but still: AI is only cheap because developers set their sights low in view of its great complexity.

Put 100 soldiers in a field, and fire a rifle at them from concealment. To act like humans, all 100 men have to drop to the ground, start looking around for dozens of possible places to take cover, while staying somewhat in formation, while preserving the chain of command. And then they have to start looking for the sniper by evaluating the sound and scanning the hundreds of possible objects the sniper could be hiding behind. And maybe doing some recon by fire and selecting random targets to spray at. And throw smoke? But who throws and where?

And that's the ideal AI. ArmA only does half of that. And don't tell me that that's only a few CPU cycles. It's literally thousands of LoS checks. Like every hitscan bullet fired in Halo ever.
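
A rough count of what that scenario costs (the per-soldier numbers below are my assumptions, but the scale matches the point):

```python
soldiers = 100
cover_spots_in_view = 30      # candidate cover positions per soldier
sniper_candidates = 200       # objects the shot could have come from

los_checks = soldiers * (cover_spots_in_view + sniper_candidates)
print(f"{los_checks:,} LoS raycasts for ONE volley")  # 23,000

# And each check is a ray/geometry intersection walk, not one add,
# so it competes directly with the renderer for the same frames.
```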

I have a soft spot in my heart for the enemy player A.I. in the old Commodore 64/Apple II era Spy vs. Spy game, for one simple and perhaps silly reason:

The game was a split-screen action/strategy game in which the Spies attempted to locate four MacGuffins and escape while setting various traps to kill the other Spy as they each attempted to maneuver around the rooms on the map on the same mission. Death was an inconvenience, taking the player out of the game for perhaps ten seconds while giving the other player a bit of free rein and possibly access to any MacGuffins the other player had already located.

One of the traps involved setting a bucket of "electrified" water over a door so that if the enemy Spy tried to open that door, it would fall on their head and immediately kill them.

When both Spies happened to run into each other in the same room, the action would condense down to one screen and the Spies had the option to try to bludgeon one another to death with clubs, an option that was certainly the slowest and most uncertain method of incapacitating one's opponent. As both players were running against a clock, this was usually best avoided.

Now, here's the thing: as this was 1980-something, both players were accessing all functions with a single eight-directional joystick and one button. This meant, among other things, that the controller function for opening a "northward" door (pressing up on the joystick and pressing the button) was more or less identical to the action for swinging a club upward (holding down the button and pressing upward). It was fairly easy for a savvy player to "trick" a human opponent into opening a trapped door and electrocuting themselves, when they were actually trying to swing their club upward.

...And I don't know why, but the computer could be "tricked" into the same error.

As Shamus notes, it's all too easy to create an AI that has advantages over the player: one who doesn't have the fog of war in a strategy game or always has free money; one who never misses in a shooter. It's harder to create one that plays convincingly like a human, including making human-style errors. But I really have to give credit when a programmer creates an "AI" player that feels like they're playing using the same interface as the player, right down to the tics of the controller. I saw it in that game back in 1980-something, and I'm not entirely certain I've seen a good example since then.

Atmos Duality:

Freeing up 20-40% of local processing resources is nowhere near enough.
Hell, TRIPLING the processing resources available wouldn't be enough today.

[image: polygon-count comparison of 600 / 6,000 / 60,000 poly models]

So, the only benefits you're left with are those akin to dedicated servers.

Which is a nice perk, but a potentially dangerous one too, since the same Cloud-process-loading feature can very easily be modified to make a game dependent on the Cloud, rendering it effectively "Always Online".

The problem with your example is that, with a good normal map, the 600-poly model gives the same graphical fidelity as the 60,000-poly one and is less resource-intensive on modern GPUs than the 6,000-poly model. And everyone uses normal maps these days. Basically, a new technology came along and totally sidelined everyone's assumptions.
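
To see why the trick works, here's a minimal sketch (my own, with invented vectors): diffuse lighting is computed per pixel from a normal vector, so a normal stored in a texture can fake surface detail the mesh doesn't actually have.

```python
def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def lambert(normal, light_dir):
    """Diffuse brightness = max(0, N . L); same math either way."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

light = normalize((0.3, 1.0, 0.2))

flat_normal = (0.0, 1.0, 0.0)               # what the low-poly face has
mapped_normal = normalize((0.4, 0.9, 0.1))  # what the normal map stores

print(lambert(flat_normal, light))    # lighting of the flat face
print(lambert(mapped_normal, light))  # "bumpy" lighting, zero extra polys
```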

I'm not saying that "the Cloud" will be that technology, particularly when it's just used as a marketing label for all the stuff that's already done server-side. Just that your example has been overtaken by changes in technology and is now misleading, which is a pity, as it's simple and easily understood.

I wonder if the AI of Alien: Isolation will be any good. It's what is going to make or break the game.

Andrew_C:

The problem with your example is that, with a good normal map, the 600-poly model gives the same graphical fidelity as the 60,000-poly one and is less resource-intensive on modern GPUs than the 6,000-poly model. And everyone uses normal maps these days. Basically, a new technology came along and totally sidelined everyone's assumptions.

*sigh*
Another great visual aid flushed right down the drain. Yeah, I forgot about bump mapping.
It's pretty great tech (especially for as old as it is), but it's not a disproof of my point; just of that specific image.

Well, now I have to find a way to demonstrate the ever-ballooning state of texture maps, static meshes, and the gaggle of "post-processing" goodies being crammed into rendering these days. All of which are FAR more resource-intensive than any AI.

I always thought all of the AI claims were a bunch of bs. Good to know I had the right idea.
But who knows? Maybe these developers will truly surprise us and come up with something amazing not possible before.

