UFO:Alien Invasion

Development => Coding => Topic started by: nonickch on June 28, 2010, 03:44:53 pm

Title: the AI discussion
Post by: nonickch on June 28, 2010, 03:44:53 pm
UPDATES:
1/July/2010: Updated diagrams (minor fixes and refactoring) and ArgoUML project file.

Morning everyone.

As you can guess by my postcount, I'm new in here.
So I decided to take a peek at this project. Models I can't do, I've got the artistic touch of a baboon, so what's left?
Coding!  ;)

So, yesterday I grabbed my cold pizza leftovers and started scouring through the code. First place I went and checked was the AI. There were quite a few things that I found annoying during my playtime (24-hr sessions over a week on 2.3):
a) Aliens stupidly home in towards the closest phalanx member. They will promptly get stuck in a corner only 2-3 steps away from plasmablade-stabbing my squaddie in the face. This gets worse when there are aliens in their spaceship where they will clump on the other side of the hull where a squaddie is taking cover.
b) The AI is really predictable. Uncover, shoot, cover, endturn.
c) I have never been on the receiving end of the armageddon called a plasma grenade. No wonder I tend to place them in southpark-esque manpiles for maximum firepower. Only once have I been stabbed with a plasblade (thrown too!) by an alien, which promptly made me restart the mission. Why doesn't this happen more often? Even though carrying blades/nades is good for the loot, I doubt the aliens do it out of courtesy.
e) Let's face it, with the alien weapons, stats, positioning and perma-infra you should have a REALLY hard time defeating any engagement, let alone the starting ones. Instead, this feels like a breeze every single time.

So, for my own understanding and for sparking a discussion on the AI, I made some half-*cough* diagrams. Bear in mind it's been ages since I've done any kind of standardized diagram, so I've just drawn circles and spammed chars all over the place. Bear with me and do fix the style if you'd like (here's the ArgoUML project file (http://www.mediafire.com/download.php?nb2lmzemmzo)).

For starters, a brief overview and a disclaimer:
I've only reviewed the code for about 10 hours, so I'm bound to be mistaken and have quite a few misconceptions. Please correct me whenever possible. I will probably be updating this first post if this thing ever catches on.
The AI is a score maximization algorithm that performs a (kinda) depth-first complete search over the tree of possible actions. There's a catch here ofc: it doesn't calculate the entire set of possible gamestates (that'd be a whole lot larger than the chess tree), it just takes turns for each of its actors and creates/searches/executes a tree for all possible moves. Given that the outcome of a shot is non-deterministic and we can't know its result prior to taking it, the scoring algorithm evaluates damage dealt with some human-like metrics: 'is it worth it?' etc. A whole series of other metrics like 'did I get to hide well enough?' are also added on top of the possible damage metric to give us the final score for each of the tree's leaves.
It's also worth mentioning that the AI barely cheats; it mostly plays like a normal player and sees what you see. The one place it does cheat is the visibility checks: it will know whether it can see a target from a spot before it goes there. Another minor exception I've noticed is that it won't consider moving to a spot where one of the enemy (and yet unseen) actors is standing.

The main AI-centric file (the C AI, not the LUA) which I've examined is the src/game/g_ai.c one. Most of the interesting things in there are included in the following diagrams, with some notices about them:
* overall flow of AI
* The diagrams are missing trivial/'inconsequential' things like 'exclude occupied spaces when calculating all possible move destinations'
* I'm only investigating sane aliens; crazed/angered aliens and civvies are ignored
* I didn't look into the gi object area, so pathing is not in there. (import_t object, I'm guessing it comes from the .bsp maps)
* You can spot the keyword 'IMMEDIATE' in some parts of the AI. This is done to differentiate between the thinking part and the execution part of the algorithm. Usually this wouldn't have been an issue but it does look like someone patched the code and ninja'ed in a hack or two (see AI_ActorThink early steps)
* You can spot some obvious function names in the diagram; they're there so you can refer back to the code really fast.
* You will see me often pointing out some 'shortcomings' whose fix would be impossible or hard to do. I'm just theorycrafting in most cases and I do like to nitpick everything.
* I've split the AI into 3 diagrams (and more should be in there). They are presented in the sequence of execution and shouldn't confuse you too much.


Let's start with the main AI loop, AI_ActorThink.
When it's the AI's turn, this should (didn't check it and I have no idea what serverFrames are) be executed once for every actor in a particular ordering.
The diagram for this is quite simple:
(http://img718.imageshack.us/img718/6453/aiactorthink.png) (http://img718.imageshack.us/i/aiactorthink.png/)

Does everyone understand my diagram style? Good, neither do I from time to time.
The main reason for this function's existence appears to be the execution of the highest-scored action found (after requesting one).
I did mention before some hacks in the code. This seems to be the place. See the reloading and checking hands part? That shouldn't (normally) be there, it should be part of the searching algorithm. We do flatten our tree quite a bit and we can end up doing silly things like:
-> If bestAction wants us to run far FAR away, reloading/equipping a gun first is a  bad idea jeans. (http://www.youtube.com/watch?v=g_2YWVW4gzE)
-> If bestAction wants us to stab someone, equipping that pistol from the holster instead of that knife from the backpack is counter-productive.
Also notice that we are executing only the shooting part of the best action here, prep does the walking for us. Oh well, fine with me :)
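To make the ordering problem concrete, here's a tiny stand-alone toy (nothing to do with the actual g_ai.c code or its numbers, everything here is made up) of why a reload the evaluator never accounted for can clash with the plan it already picked:

/* Toy illustration, NOT the real g_ai.c: the evaluator picks an action
 * without knowing about the reload that gets bolted on outside of it,
 * so the combined plan may not even be affordable. */
#include <stdio.h>

#define MAX_TU    30
#define RELOAD_TU 12

typedef struct {
    const char *desc;
    int moveTU;       /* TUs the chosen action spends on walking  */
    int shootTU;      /* TUs it spends on shooting                */
    int needsReload;  /* 1 if the weapon happens to be empty      */
} action_t;

/* TUs left after executing the chosen action plus the out-of-band reload */
static int tuLeft(const action_t *a)
{
    int spent = a->moveTU + a->shootTU;
    if (a->needsReload)
        spent += RELOAD_TU;  /* cost the evaluator never saw */
    return MAX_TU - spent;
}

int main(void)
{
    const action_t runAway = { "run far, far away", 28, 0, 1 };
    const action_t stab    = { "close in and stab", 10, 8, 1 };

    /* negative result = the 'best' plan was not actually affordable */
    printf("%s: %d TU left\n", runAway.desc, tuLeft(&runAway));
    printf("%s: %d TU left\n", stab.desc, tuLeft(&stab));
    return 0;
}

If the reload were instead a scored part of each candidate action, its TU cost (and whether it's needed at all for that action) would compete inside the search like everything else.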
Now, some theorycrafting: Do you notice that each actor moves & shoots in sequence? That's limiting the tactical abilities quite a bit. I think this is quite obvious in the landing standoffs where you and the aliens are facing each other. Instead of thinning out your numbers before trying to get cover, each of the aliens shits its pants individually and runs for cover. It would be nice to at least get a scheme for aliens to spend some of their TU's to try and see if the situation improves (remember, non-deterministic, so we can't/shouldn't make one common huge tree for all the actors).

Right, onto the AI_PrepBestAction part that's already been referenced:
This part of the code calculates all the possible moving positions for the actor and requests the scoring of what can happen from each position. It finds the best (by score) spot to move to with its resulting actionSet (like move there, shoot that, then move over there) and executes only the moving part.
(http://img130.imageshack.us/img130/8013/aiprepbestaction.png) (http://img130.imageshack.us/i/aiprepbestaction.png/)
Pretty clear-cut I believe.
Notice the dual-path of scoring. I believe the right one is the traditional one and the left one was added for supporting mission targets. Do we even have mission targets in the game? At least we got support for those ;)
I assume this is the part that will make the aliens dash for your alien containment/command center of your base once such a thing is in. Again, there shouldn't be a branch here, target evaluation should be part of the scoring code, but it's not bad enough to warrant a change (yet). This is the stuff that spaghetti code is made of :P

Well, that was pretty easy up to here, now for the hard part.

Welcome to the AI_FighterCalcBestAction
This function is called once for every possible spot each actor can move to. In the search tree, we're at the last level of nodes before the leaves start. The whole goal of this function is to find what's best to do from a position. This usually entails shooting someone, but it may end up being just a nice hiding spot. All of the action completion and scoring happens here as well.
(http://img197.imageshack.us/img197/6022/aifightercalcbestaction.png) (http://img197.imageshack.us/i/aifightercalcbestaction.png/)
This is where it gets kinda complicated.
Don't get confused by the 2nd level of cute balloons. I didn't know how to depict this properly without creating a huge diagram. I'm only trying to depict a set of nested whiles there. It essentially says foreach_possible_target{foreach_possible_firestyle{calculate_damage}} to find the maximum damage possible.
So, we can see that the basis and first hint at the scoring system is the possible damage dealt. After applying some of the bonuses/maluses to the damage expected, we have a semi-complete action (up to now it only had a moving part from prepBestAction) and its basic scoring. I say semi-complete because we may end up adding more moving to it by the end of this function.
After this we start adding/subtracting scores according to some principles we make our fluffy little aliens like: hide, hide, hide and then try to close in the distance. Oh, and hide some more. The bonuses to the current score for things like hiding are #def's in the file and they start at about line 39 of g_ai.c. I have to note here that there is a considerable impact on the AI from using absolute dmg values as part of the score. At least this should (and I think it does in practice) make early aliens a bit timid, but when they get nice weapons they get a bit more aggressive. Then again if you add more armour it makes the pistol-wielders cry like babies :}
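For the visually inclined, here's a compressed stand-alone sketch of that loop nest and the bonus stacking. It's only the shape of the thing; the names, the numbers and the bookkeeping (where exactly the hide bonus and the kill check happen) are simplified stand-ins for the g_ai.c code, not a copy of it:

/* Toy skeleton of the scoring described above: for every target and every
 * fire mode, estimate the damage, stack the behaviour bonuses on top, keep
 * the best.  Everything here is a simplified stand-in, not the real code. */
#include <stdio.h>

#define NUM_TARGETS   3
#define NUM_FIREMODES 2
#define BONUS_HIDE    60.0f  /* same ballpark as the GUETE_* defines */
#define BONUS_KILL    30.0f

typedef struct { float hitProb, damage; int targetHP; } shot_t;

int main(void)
{
    /* [target][firemode]: made-up hit chances, damages and target HP */
    const shot_t shots[NUM_TARGETS][NUM_FIREMODES] = {
        { { 0.6f, 40.0f,  35 }, { 0.3f, 90.0f,  35 } },
        { { 0.8f, 40.0f, 100 }, { 0.4f, 90.0f, 100 } },
        { { 0.7f, 40.0f,  20 }, { 0.5f, 90.0f,  20 } },
    };
    const int canHideAfterwards = 1;  /* pretend the hide check said yes */
    float best = -1.0f;
    int bestTarget = -1, bestMode = -1;

    for (int t = 0; t < NUM_TARGETS; t++) {        /* foreach possible target */
        for (int f = 0; f < NUM_FIREMODES; f++) {  /* foreach fire mode       */
            const shot_t *s = &shots[t][f];
            float score = s->hitProb * s->damage;  /* expected damage = base  */
            if (s->hitProb * s->damage >= s->targetHP)
                score += BONUS_KILL;               /* likely kill             */
            if (canHideAfterwards)
                score += BONUS_HIDE;               /* end the move hidden     */
            if (score > best) {
                best = score; bestTarget = t; bestMode = f;
            }
        }
    }
    printf("best: target %d, fire mode %d, score %.1f\n", bestTarget, bestMode, best);
    return 0;
}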
I got kinda lost in the fire calculations part. Especially with the LEFT/RIGHT & firing types. I might be missing something important in there (it's only abstract in the diagrams). There's a good chance the 'aliens try to shoot through walls' bug is in there because of the visChecked var. Not sure. Notable omission (of what should be another diagram): AI_HideNeeded
Also note the huge values of the bonuses like hide. 60! I'd make some guesses here on the behaviour this causes, but I'd refrain since it dawned on me that the search space is quite large for me to digest at this time. At least I would suggest some toying around with those #defs to see if we can get some nicer behaviour.

Do you remember my gripe about the aliens sticking to the other side of the hull of the ship when my squaddie is right next to it? I believe we can trace this behaviour down to the closing-distance scoring. Notice that I do mention in the diagram that the distance used is the vector distance, not the pathing distance. Combine this with the fact that the AI cannot possibly plan for more than one turn (we only ever check where we can go this turn) and the scoring ends up maximizing behind the wall, because it is close by absolute distance and no other reachable spots produce anything with a better score (we're already hidden and can't reach anywhere to shoot someone). The obvious solution to this is to factor in the path distance as well. Replacing vector with just path *might* be counter-productive since vector distance usually also means firing distance when no walls are involved. This is probably worth checking in the immediate future.
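A minimal sketch of what 'factor in the path distance as well' could look like, built around the scoring term quoted further down. The blend (taking the larger of the two distances) and all the numbers are just one possible illustration, and pathDist would have to come from the pathfinding data, which isn't reproduced here:

/* Toy sketch: damp the close-in bonus when a spot is close as the crow flies
 * but far by walking distance (e.g. the far side of a UFO hull).  The blend
 * and the numbers are illustrative only. */
#include <stdio.h>

#define GUETE_CLOSE_IN   20.0f
#define CLOSE_IN_DIST  1200.0f

/* current term (as quoted from g_ai.c): straight-line distance only */
static float closeInVector(float vectorDist)
{
    return GUETE_CLOSE_IN * (1.0f - vectorDist / CLOSE_IN_DIST);
}

/* possible variant: use whichever distance is larger, so an unreachable
 * "close" spot behind a wall no longer scores like an adjacent tile */
static float closeInBlended(float vectorDist, float pathDist)
{
    const float d = pathDist > vectorDist ? pathDist : vectorDist;
    return GUETE_CLOSE_IN * (1.0f - d / CLOSE_IN_DIST);
}

int main(void)
{
    /* spot right behind the hull: 64 units away in a straight line,
     * but ~900 units of actual walking to get around the ship */
    printf("vector only: %.2f\n", closeInVector(64.0f));
    printf("blended:     %.2f\n", closeInBlended(64.0f, 900.0f));
    return 0;
}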
A nice testing idea: Try expanding the search space of PrepBestAction to spots reachable with twice the TU's of the unit (preferably currentTU's + maxTU's, since we might have reloaded. See why we shouldn't add actions outside the evaluator?). Ofc since the code is written for 1-move, we might have to double/triple check that nowhere does the AI try to move the entire length if that's exceeding max TU's (does the server even allow us to do that?). That should at least broaden the horizon of our lovely critters, may bring an end to the hull-hugging and breathe some air of longer-term planning into them. It could also end up unleashing the hell of Mordor upon unsuspecting kittens across your window, you never know what will actually happen in such situations unless you try it. Performance might be an issue here as well; we're not just doubling the size of the search space, we're doing much more (running complexity calcs right now is out of the question, I need to get this post over with).
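Here's roughly how the widened search could treat the extra spots, as a stand-alone sketch. Names and numbers are invented, and it glosses over everything the real code would have to handle, like actually clamping the executed move to the TUs we have this turn:

/* Toy sketch of the "search twice the TU radius" idea: spots reachable this
 * turn keep their shot score, spots only reachable next turn are scored on
 * positioning alone, spots beyond two turns are dropped. */
#include <stdio.h>

typedef struct { int pathCost; float shotScore, positionScore; } spot_t;

static float scoreSpot(const spot_t *s, int currentTU, int maxTU)
{
    if (s->pathCost > currentTU + maxTU)
        return -1.0f;                        /* outside the widened search space  */
    if (s->pathCost > currentTU)
        return s->positionScore;             /* next-turn spot: no shot this turn */
    return s->positionScore + s->shotScore;  /* reachable now: full score         */
}

int main(void)
{
    const spot_t spots[] = {
        { 12, 55.0f, 20.0f },  /* reachable now, decent shot           */
        { 38,  0.0f, 90.0f },  /* great spot, but a turn of walking    */
        { 95,  0.0f, 99.0f },  /* too far even with the doubled radius */
    };
    const int currentTU = 30, maxTU = 30;

    for (int i = 0; i < 3; i++)
        printf("spot %d: score %.1f\n", i, scoreSpot(&spots[i], currentTU, maxTU));
    return 0;
}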

Also, look at the place where the actor needs to hide from danger. If I'm reading it right, there's an OR where an AND should be. Am I reading this right? Input needed.

It's generally great to hide/take cover, but I think this is where the 'aliens stick behind corners' behaviour comes from. Like when raiding a house with a close-combat/flamer guy. If you stay like ~15 tu's away from him, just around a corner next to a door, he will opt for closing in (CLOSE_IN bonus) while still staying out of your direct sight (HIDE bonus) instead of trying to get to a safer position or take a chunk out of your HP (better yet, introduce me to mr. plasblade). Do you know what spot seems to produce that max score? Yup, it's the other side of that wall, 3 steps away from me :(


OVERALL
It's not as bad as I thought it would be from my experience. Maybe we can get some quick and painless boost by expanding the search tree by extra moves-worth of tu's. Also, finetuning the score-bonus #def's? That's a lot of tinkering requiring recompiles and game restarts :(
Notice the full-tree searching (with limited performance cutoffs) that exponentially grows with all_moves->all_targets->all_shootTypes. This tree can quickly grow enormous and will probably create issues if teamsizes/numbers are ever scaled (hmmm, can I put like 10 AI teams+me in a map and watch the lagfest?). Then again for our limited usual cases this guarantees the best solution given our metrics.
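To put a rough number on it (purely illustrative figures, not measured from the game): with something like 300 reachable tiles, 8 potential targets and 4 fire modes, that's already on the order of 300 x 8 x 4 = ~10,000 damage evaluations per actor per turn, and it scales linearly with every extra actor and multiplicatively with every extra dimension we add to the search.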
There is absolutely NO battleplan and zero actor cooperation. We could do something like baiting without too much hassle? That's like a mini-battleplan. "Hey look, there's a smartass with a firebreather that can candle me, how about I bait him towards me past the corner where the reaction-fire welcoming committee is waiting ;)" Hmm, that would require cooperation, pretty hard to do this in the current design. Maybe just coming to the aid of another cornered alien?

I hope this thread gets some discussion started about the AI, and who knows, some patching and a general housecleaning if that's not enough.

So here's some questions-notes I didn't compile into the wall-o-text but don't want to skip:
a) What's going on with LUA? Is it coming in sometime in the future as the standard higher-function AI handler? I heard something about a slowdown in pathing, what's the issue? Exporting the base AI functions to lua and working from there would tremendously help any development/testing. I don't even know if this AI is mirrored in the LUA so I can just opt into it at compile time
b) Game balance is done with AI in mind. If the AI suddenly gets a whole lot nastier, how quickly can the rest be rebalanced? (alien numbers/stats/weapons/introduction to missions etc)
d) Getting ufoai as coursework = influx of geeks. Given that the primitives to use are quite straightforward, I bet lecturers may consider using ufoai as a basis for teaching via real examples. You can't believe how crappy and boring the environments used at universities are, even these days.
e) Actor limitations are hardcoded and not configurable :( (e.g. bloodspiders can't pick up things). That's going to hamper development quite a bit. Maybe we should be grabbing those from the creature defs (which I have no idea where they reside)
f) Official Performance limitations? X/sec/actor on Y cpu? How fat can we make our search tree and still get away with it?
z) Holy (academic) grail/solution for the AI: Machine learning (yea, I can kick your ass after I learn how you play) coupled with an agent (ewh, I hate this term) system of planning and coordination. Can I/will I try to do this?
HELL NO!

I may not have a life atm, but damn, I'm trying to get one, not permanently snuff it out.
Middle ground anyone? Only patching? Rewriting parts/whole of the code?

Oh well, this is what happens when the game locks up and eats my X a couple of times.
This is your punishment: Read this wall-o-text because I had nothing better to do  ::)
Title: Re: the AI discussion
Post by: Mattn on June 28, 2010, 06:55:43 pm
wow - that sounds like it's not the first time you've looked into ai coding ;)

i've committed your flowcharts to the svn - they are quite handy.

the lua ai should (someday in the future) replace the c-ai - but the c-ai is currently still more advanced (even if you find a lot of flaws in it).

i'm sorry, but i don't have 9 hours to write an answer ;) - and don't remember all the questions you asked in your post so if you would like to know something particular, please ask - i will try to answer some stuff.

the server will allow you to check above your TU boundaries - but there is a max step width in the server to limit the cpu time that would be needed to calculate this (see Grid_MoveCalc).

about "Actor limitations are hardcoded and not configurable": this is (or should be) in teamDef_t and in base/ufos/team_*.ufo

"Game balance is done with AI in mind" - i think if the AI would get smarter that would not have a direct impact on the weapon balance or something like that - "only" on e.g. the map layout - but this should be fast to fix in cases where this would be needed.

"Getting ufoai as coursework" - that would be quite cool - but to be honest i don't have an idea how to push something like that.

if you would like to improve or tweak the ai and would send patches we would include them with pleasure.

hope to hear more from your side

regards
martin
Title: Re: the AI discussion
Post by: dodon on June 28, 2010, 10:30:05 pm
The AI is quite weak and we should improve it.
So here are some random thoughts


I have not noticed MedPacs in your description. Are they missing?


a) Aliens stupidly home in towards the closest phalanx member. They will promptly get stuck in a corner only 2-3 steps away from plasmablade-stabbing my squaddie in the face. This gets worse when there are aliens in their spaceship where they will clump on the other side of the hull where a squaddie is taking cover.
After I noticed this, my success rate went up :)


That's limiting the tactical abilities quite a bit. I think this is quite obvious in the landing standoffs where you and the aliens are facing each other. Instead of thinning out your numbers before trying to get cover, each of the aliens shits its pants individually and runs for cover.
I think the aliens go for the civilians (easy to hit/kill (no armor) + good place to hide (won't fight back if not killed))
So this might need some rebalancing


Also note the huge values of the bonuses like hide. 60!
I would judge the hiding place by these questions:


Do you remember my gripe about the aliens sticking to the other side of the hull of the ship when my squaddie is right next to it? I believe we can trace this behaviour down to the closing distance scoring. Notice that I do mention in the diagram that the distance used is the Vector, not the pathing distance. Combine this with the fact that the AI cannot possibly plan for more than one turn (we only ever check where we can go only this turn) and the scoring ends maximizing behind the wall because it is close by absolute distance and no other reachable spots produce anything with a better score (we're already hidden and can't reach to shoot someone). Obvious solution to this is to factor in the path distance aswell. Replacing vector with just path *might* be counter-productive since it usually also means firing distance when no walls are involced. This is probably worth checking in the immediate future.
I would describe this action as: go near a place where I can use my weapons in the next turn. And different weapons call for different actions:


If I had to improve the AI I'd
and then check the new behaviour.
Title: Re: the AI discussion
Post by: Marte on June 29, 2010, 09:12:49 am
nonickch = 1 titanic post to read, interesting too!
I like the idea of a pluggable (or something like that) AI in the game, it could be really cool and bring new people to the project
Title: Re: the AI discussion
Post by: nonickch on June 29, 2010, 02:39:17 pm
Since people tend to skim the first few lines, some requests first:
* Right now I'm just fiddling with the #defs in g_ai.h (the GUETE ones). Can anyone whip up a quick way for me to tinker with them from something like the console? I see some set commands in there, so I guess I could use that functionality and replace all the uses of the GUETE defs in g_ai.c? Things would go a lot faster, as atm I'm just recompiling for each tested value :/
Help appreciated with some playtesting. I'm always on irc and typing my name there bleeps my client (I'm on GMT+2 tz)

the server will allow you to check above your TU boundaries - but there is a max step width in the server to limit the cpu time that would be needed to calculate this (see Grid_MoveCalc).
What do you mean 'check'? If I accidentally ask the server to move the alien past its available TU's, is it going to let me? Spit an error? Silently discard?
I bet I'm going to find out sooner than later, but just asking in case you remember off the top of your head
Also, you mentioned a couple of neat startup tricks like +set developer 1 and +map in irc. Is there a wiki page/thread detailing those?
Since you did put the graphs/project in the contrib/, I'd rather go make it prettier and more formal. I saw the LUA civvie graph there which I'll use as an example (oh, and if you look closely at that, the two levels of the graph are identical apart from a child node. So you can compact them by merging them and just sticking a conditional at the differentiating point. Remember my nitpicking comment? ;))

dodon:
Yes, medipacks are nowhere to be seen in the AI. And I have never seen medipacks as part of the loot, so I guess the aliens never carry them.
The civilian targets are already penalized by something like *0.6. I guess that's still not enough as I have seen a lot of cases where the aliens shoot them first even though I'm clearly visible and shootable. That will be part of the value tinkering.
The 'next turn' metrics you propose can only be implemented if we expand the search tree to another turn. That's one of the things to do after playing around with the #def's, as this will probably not be as easy as it sounds. I do remember a comment about limited searching and the need to rewrite other parts of the code. Can be hairy.
Indirect fire seems to be totally out of the picture in the current AI. Grenades are a todo in a place where it shouldn't be that hard, but that's only for visible targets. Lobbing grenades over fences is not the same thing, since the AI does not consider targets that are not visible to the current actor. Finally, there's the part about calculating splash damage: the hit percentage of a nade is small, yet the damage caused is still way above significant. Given the TU's needed to fire alien weapons, and a working grenade-tossing AI, I wouldn't be surprised if they insisted on chucking all their grenades at the first opportunity (I'd laugh so hard if this happens, half your squad gone on round one in the close-quarter spawns).
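To illustrate why the expectation can still come out big despite the lousy hit chance, here's a toy splash model. The linear falloff, the radius and every number in it are invented and have nothing to do with the actual damage code:

/* Toy expected-damage model for a lobbed grenade: even a miss that lands a
 * tile off still catches a clump of soldiers in the splash radius. */
#include <stdio.h>

#define SPLASH_RADIUS 3.0f
#define CENTER_DMG  120.0f

/* invented linear falloff from the impact tile */
static float splash(float dist)
{
    return dist >= SPLASH_RADIUS ? 0.0f : CENTER_DMG * (1.0f - dist / SPLASH_RADIUS);
}

int main(void)
{
    /* three clumped soldiers: distances (in tiles) from the aimed-at tile
     * and from a plausible one-tile miss */
    const float onTarget[3] = { 0.0f, 1.0f, 1.5f };
    const float nearMiss[3] = { 1.0f, 1.5f, 2.0f };
    const float pHit = 0.35f;  /* lousy direct-hit chance */
    float dmgHit = 0.0f, dmgMiss = 0.0f;

    for (int i = 0; i < 3; i++) {
        dmgHit  += splash(onTarget[i]);
        dmgMiss += splash(nearMiss[i]);
    }
    /* expectation over the two outcomes */
    printf("expected splash damage: %.1f\n", pHit * dmgHit + (1.0f - pHit) * dmgMiss);
    return 0;
}

Even with a 35% direct-hit chance the expectation against a manpile comes out well above a single aimed shot, which is exactly why a grenade-capable AI could get nasty fast.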
As for the 'aliens shoot through walls', I'll have to read a whole lot more (or at least make the console start raining msg's). I haven't been able to reliably reproduce it yet, so fiddling with something and then playing 5 hours waiting for it to happen is not that feasible. That, and I can't really say I have a solid theory on why it happens. Maybe it's the same as the 'bug' the player gets: the target line is green (no obstacles), the hit % is good, yet all the shots strike an obstacle.
Refactoring the code and moving the snippets out of actorThink and the mission-target branch out of PrepBestAction shouldn't be that hard. It's probably the next thing to do, since I do like to find things where they should be.

Marte:
The pluggable part is all about the LUA. I'm ambivalent on that issue at this point since I have absolutely no experience with the language. I also heard that someone recently implemented that, is he still around?
It's going to be hard(er) for me to get to grips with LUA than to get back to my olde C days. Then again if I get to use it, things will definitely go a whole lot faster. Plus it will make people much more interested in playing with it, as it will be easier for them.
I do believe the middle road is much better. The primitives exposed to LUA by the C code could expand to encapsulate the more process-heavy, long-winded and least likely to change things like the damage calculations. Then all that's left to the LUA script is to deal with higher-level intelligence.

Right, my compile is done (why the hell did it recompress all the pk3's?). Going back to playing.

edit: bingo. Halved the hide bonus and the little critters started getting way more aggressive, at least in the spaceship scenario where most of them marched out and started shooting stuff. It was easier to shoot them, but only after they had shot to pieces every visible civvie in sight of the door. Clearly there's a lot to be gained here, time to go nerf the civvie shooting score. In the end, it will boil down to a collective decision on what the balance should be.
Hmm, can anyone say distinctive alien behaviours? If the scoring #defs are moved to the critter stats you can get per-alien, or even better per-race, behaviours! Say Tamans are more careful/hide-prone, while Ortnoks tend to be more gung-ho. Maybe use morale here as well; shot-up Ortnoks are not that prone to leeroying.
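Just to sketch what that could look like (the struct, the field names and every number below are invented for illustration; the real home for this would presumably be teamDef_t / the team_*.ufo scripts Mattn mentioned):

/* Sketch of per-race AI weights instead of one global set of GUETE_* defines.
 * Purely illustrative values. */
#include <stdio.h>

typedef struct {
    const char *race;
    float hide, closeIn, kill, civFactor;
} aiWeights_t;

static const aiWeights_t weights[] = {
    { "taman",       80.0f, 15.0f, 30.0f, 0.25f },  /* careful, hide-prone     */
    { "ortnok",      30.0f, 35.0f, 40.0f, 0.25f },  /* gung-ho, closes in hard */
    { "bloodspider", 10.0f, 60.0f, 50.0f, 0.00f },  /* melee: has to close in  */
};

int main(void)
{
    for (unsigned i = 0; i < sizeof(weights) / sizeof(weights[0]); i++)
        printf("%-12s hide=%.0f close_in=%.0f kill=%.0f civ_factor=%.2f\n",
               weights[i].race, weights[i].hide, weights[i].closeIn,
               weights[i].kill, weights[i].civFactor);
    return 0;
}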
Mmmmm, I smell nice things  ;D
Title: Re: the AI discussion
Post by: Hertzila on June 29, 2010, 04:59:31 pm
Indirect fire seems to be totally out of the picture in the current AI. Grenades are a todo in a place where it shouldn't be that hard, but that's only for visible targets. Lobbing grenades over fences is not the same thing, since the AI does not consider targets that are not visible to the current actor. Finally, there's the part about calculating splash damage: the hit percentage of a nade is small, yet the damage caused is still way above significant. Given the TU's needed to fire alien weapons, and a working grenade-tossing AI, I wouldn't be surprised if they insisted on chucking all their grenades at the first opportunity (I'd laugh so hard if this happens, half your squad gone on round one in the close-quarter spawns).

This nearly happened to me once: 6 aliens, all of whom had only grenades as weapons (one had a kerrblade but that's it). I had a laserguy on RF some distance away from the door, crouching. In their turn, every single one of them just walked out of the door and lobbed a grenade at that guy. I had my men behind the corner too, so one bad bounce and nearly all my agents would be dead. Luckily, since they are just as good throwers as your agents (that is, even worse than an armless guy), the grenades missed by a good margin, no one turned up dead and only one guy suffered damage. My next turn was storming their "safehouse" where all of them were neatly packed in one corner.
Title: Re: the AI discussion
Post by: Mattn on June 29, 2010, 06:55:33 pm
Since people tend to skim the first few lines, some requests first:
* Right now I'm just fiddling with the #defs in g_ai.h (the GUETE ones). Can anyone whip up a quick way for me to tinker with them from something like the console? I see some set commands in there, so I guess I could use that functionality and replace all the uses of the GUETE defs in g_ai.c? Things would go a lot faster, as atm I'm just recompiling for each tested value :/
Help appreciated with some playtesting. I'm always on irc and typing my name there bleeps my client (I'm on GMT+2 tz)

make cvars out of them.
Title: Re: the AI discussion
Post by: arakis on June 29, 2010, 10:35:59 pm
if this is a squad based game, how come the ai is hampered by every alien being out for himself? I think it should be a priority to rehaul the ai so it thinks like a player, one entity controlling all aliens and getting inputs from all aliens. My opinion only, should not be taken too seriously
Title: Re: the AI discussion
Post by: Duke on June 30, 2010, 01:35:25 am
@nonickch:
First off, welcome to these forums :)

Secondly, kudos for your (almost)precise analysis of what the AI code does. It took me much longer to figure that out. And thanks for writing it down ;)

1. Would you mind transforming your first post into a wiki article about the 'as-is AI'? Somewhere in the vicinity of the BSP and pathfinding articles would be suitable imho.

2. Fiddling with the GUETE_* values is fine. I bet you can heavily influence the AI behaviour. But imho those values should NOT be made cvars unless restricted to debug mode (think: bug reports!).

3. Imho LUA is NOT the way to go yet. AI mostly has a performance problem. LUA will certainly not speed up AI.

4. We should define a limit for the duration of the alien turn on a contemporary machine. Afaik this has not been done yet. I'd suggest 2 min.

5. Having #4 in mind, a two-turn look-ahead is currently not really doable. btw to get the crew out of the harvester cockpit, you'd need a three- or four-turn look-ahead !

6. Sure enough I'd like to see a team-behaviour. Do you happen to know about any concepts for that ?

7. Always keep in mind that it doesn't help us to have a perfect (slow) AI that forces us to downgrade the firepower of the aliens. What we want is playability without too much NS (=natural stupidity, pun intended).

Imho your analysis of the AI code justifies immediate commit access to the svn, but that's up to Mattn (the rule is: send a patch that we accept).
Also let me express the wish to make a person with your skills a *permanent* member of the UFO dev team (provided you keep on discussing before committing).
I know AI is every programmer's darling, but there are several even more important places where your skill could help...
Title: Re: the AI discussion
Post by: Mattn on June 30, 2010, 06:48:12 am
@Duke: Keep in mind that the game frame (and thus the ai frame) is threaded now - the ui won't be blocked while the ai round is running.
Title: Re: the AI discussion
Post by: bayo on June 30, 2010, 11:22:08 am
Quote
3. Imho LUA is NOT the way to go yet. AI mostly has a performance problem. LUA will certainly not speed up AI.

AI is something that needs tweaking and rework. All the computation that needs resources, like pathfinding, bsp access..., is/will only be a C binding. Scripting should be the way, and one day, if we find the final AI, we can speed it up with C. Before that, working in C will only mean harder work without a sure result.

Quote
Keep in mind that the game frame (and thus the ai frame) is threaded now - the ui would not interrupt when the ai round is running.
Is everything already protected by a mutex? For the UI at least, the main game frame should be protected to avoid threads accessing shared resources (for example if the AI needs to read/edit a cvar or things like that).
Title: Re: the AI discussion
Post by: nonickch on June 30, 2010, 04:41:04 pm
This nearly happened to me once: 6 aliens, all of whom had only grenades as weapons (one had a kerrblade but that's it). I had a laserguy on RF some distance away from the door, crouching. In their turn, every single one of them just walked out of the door and lobbed a grenade at that guy. I had my men behind the corner too, so one bad bounce and nearly all my agents would be dead. Luckily, since they are just as good throwers as your agents (that is, even worse than an armless guy), the grenades missed by a good margin, no one turned up dead and only one guy suffered damage. My next turn was storming their "safehouse" where all of them were neatly packed in one corner.

Hmm, interesting, I was under the impression that the AI couldn't lob grenades. That should teach me to read the code better. Which leaves me to go figure out what the nades @todo is all about.

make cvars out of them.
I got wind of that some time yesterday. I've now made a modded g_ai.c file that utilizes cvars. That's only for finetuning the AI and, as Duke says, there's no reason/point in ever committing that. This will be my main focus later on in the post, trying to gauge the actual effect of the GUETE values (and not just assume as I did up to now). Here's the modified file (http://www.mediafire.com/download.php?mgymftzozzy). Essentially FighterThink utilizes cvars with the same names as the GUETE #defs, only in lowercase. Also there's a g_ai_kill option (in AI_Tun) that disables the AI so you can maneuver your guys in peace and test specific scenarios.
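For anyone curious what the tunable-at-runtime idea boils down to, here's a self-contained toy version: a table of named weights plus a console-style setter. The real modded file goes through the engine's cvar system instead of a hand-rolled table like this, so treat it purely as the concept:

/* Stand-alone toy of runtime-tunable AI weights (the real thing uses cvars). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { const char *name; float value; } tunable_t;

static tunable_t guete[] = {
    { "guete_hide",       60.0f },
    { "guete_close_in",   20.0f },
    { "guete_kill",       30.0f },
    { "guete_civ_factor", 25.0f },  /* i.e. 0.25 after the /100 in the modded file */
};

#define NUM_TUNABLES (sizeof(guete) / sizeof(guete[0]))

static float getTunable(const char *name)
{
    for (unsigned i = 0; i < NUM_TUNABLES; i++)
        if (strcmp(guete[i].name, name) == 0)
            return guete[i].value;
    return 0.0f;
}

/* what a console command like "set guete_hide 30" would end up doing */
static void setTunable(const char *name, const char *value)
{
    for (unsigned i = 0; i < NUM_TUNABLES; i++)
        if (strcmp(guete[i].name, name) == 0)
            guete[i].value = (float)atof(value);
}

int main(void)
{
    setTunable("guete_hide", "30");
    printf("guete_hide is now %.1f\n", getTunable("guete_hide"));
    return 0;
}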

if this is a squad based game, how come the ai is hampered by every alien being out for himself? I think it should be a priority to rehaul the ai so it thinks like a player, one entity controlling all aliens and getting inputs from all aliens. My opinion only, should not be taken too seriously
'thinks like a player' is not something we can even theoretically reach, given that we don't really know how any player thinks. What we try to do is make it look like it thinks like a player. Depending on the situation, this can be harder than others. Look at the chat Turing tests, there's not much AI behind that, just games with words and proper grammatical analysis. In a squad-based game like this, one would expect at least some form of plan discovery and execution. Baiting is the simplest of cases, fire & maneuver is another one. The problem here is that planning lands us spot-on in the machine learning parts of AI, where brainsuckers lie (I'll rumble about this a bit later). Even though fire & maneuver requires us to define and translate to game terms higher semantics (squads, maneuver directions, and above all 'situational awareness' concepts like controlling parts of the map), for baiting we could probably get away without all that stuff by adding some smart metrics to the algorithm (check the enemy weapon, check if using it on us in his next turn reveals him to one of our team members with RF on). Although even in this simple case we'd now have to search trees for possible enemy moves and suddenly have some kind of cooperation between the 2 (or more) participants of the trap (you could just check if the executioner has finished his turn and has RF, but that is unlikely to ever happen).

1. Would you mind transforming your first post into a wiki article about the 'as-is AI'? Somewhere in the vicinity of the BSP and pathfinding articles would be suitable imho.
Since reformatting the AI diagrams is definitely on the todo list, I can do that at the same time. I just need to read up on the parts I really didn't understand so I don't have any gross mistakes (see part where I thought nades are never used).
Imho LUA is NOT the way to go yet. AI mostly has a performance problem. LUA will certainly not speed up AI.
Pretty much with you on that, but I do have to note that LUA does not ever have to touch the processor-heavy parts like the trees. Simple primitives like the ones offered atm by the C interface would require the LUA to perform the entire tree searching, which is probably a no-no. But if instead we provide only/also higher primitives like the entire FighterCalcBestAction, then the LUA has limited cpu impact, but also limited AI impact. Maybe sometime in the future it'd be nice to have that so lua-modders can do simple things like ignoring the result and/or executing it differently. Conga-line mod it is!

5. Having #4 in mind, a two-turn look-ahead is currently not really doable. btw to get the crew out of the harvester cockpit, you'd need a three- or four-turn look-ahead !
2 minutes is a whole lot of time, and kind of excessive. I don't think I've seen a turn take much more than 30 secs on my amd 2500+ cpu.
If we go after the extra lookahead and if it does prove to be very slow, there are quite a few things that we can do to drastically cut down on the searching time, depending on the concessions we make:
a) If we don't want to sacrifice ANY of the perfection a full-search algorithm provides, we can place strict searching cutoffs (see the sketch after this list). A classic cutoff is the following: we are sitting in front of an enemy, we have a plasblade in our hands. We know that the damage we can deal is 100HP, plus the GUETE_KILL bonus of 30, and we can easily hide after the kill while still being close-ish to the enemies (hah, now that I think of it, the AI probably includes the enemy that's about to die in the distance checking. Minor issue tbh). So that's another ~70 pts, bringing us to the ungodly amount of 200. At this point we should stop searching as this is the theoretical maximum. This is an extreme case ofc, but we can adjust for other cases: if 200 is the max, and we can achieve 150, we should never consider any tiles from which we cannot hide, since the max of those would be 140 tops. Now, if you order the checked tiles by path distance from the current spot (instead of foreach_h{foreach_y{foreach_x}}), you should be reaching some high-valued scores early enough (since you have more tu's to shoot & hide). This means that the cutoffs will kick in sooner.
b) Apply heuristic cutoffs. These are unproven 'human intuition' rules, which are already employed in the AI (see the damage calculations where there are cutoffs based on expected dmg and target visibility). An example could be: 'If your current max score is 100, do not consider tiles that are further away from the closest target'. Or: 'The most score deviation you can get from two adjacent tiles with the same visibility is 20, so calculate the center tile and check the neighbors against the current max to see if it's actually worth calculating the full score'. Depending on the quality of the heuristics, you can easily prune the tree really fast without losing many cases where the score would be higher. You could even order the nodes according to the likelihood that they will produce better results and process them in some ad-hoc sequence while having a timer ticking that tells us when to stop. Iterate best-to-worst for the max possible results, other orderings for adjusting the AI strength according to a game setting and/or an alien intelligence stat. Holy crap, I really liked this idea.
c) Full tree searching is something rarely done. There's a whole lot of other algorithms for incompletely searching a tree that may produce 'good enough' results. A classic example is a greedy algorithm which will simply try to maximize the score at each step and ignore all the rest of the tree's nodes (it can search insanely large trees, but has obvious shortcomings: in chess, it would never sacrifice even a pawn, even if the next turn could produce a certain check-mate). I'd need to refresh my theory a bit more on this scenario, as the choice of algorithm is heavily dependent on the application and a bad pick will easily produce poor results. This case really is more or less like (b).
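A quick stand-alone sketch of the cutoff from (a), using the same ballpark as above (best shot + kill bonus + hide bonus = ~200 theoretical max); the bound and the tile scores are invented:

/* Toy branch-and-bound cutoff: skip fully scoring a tile if even an
 * optimistic upper bound can't beat the best score found so far. */
#include <stdio.h>

#define MAX_WEAPON_DMG 110.0f  /* pretend best single shot in the game */
#define BONUS_KILL      30.0f
#define BONUS_HIDE      60.0f

typedef struct { float realScore; int canHide; } tile_t;

/* optimistic bound: best possible shot + kill bonus (+ hide if available) */
static float upperBound(const tile_t *t)
{
    return MAX_WEAPON_DMG + BONUS_KILL + (t->canHide ? BONUS_HIDE : 0.0f);
}

int main(void)
{
    const tile_t tiles[] = { { 150.0f, 1 }, { 90.0f, 0 }, { 185.0f, 1 }, { 60.0f, 0 } };
    float best = -1.0f;
    int fullyScored = 0;

    for (int i = 0; i < 4; i++) {
        if (upperBound(&tiles[i]) <= best)
            continue;                 /* pruned: can't possibly beat the best */
        fullyScored++;                /* pretend this is the expensive part   */
        if (tiles[i].realScore > best)
            best = tiles[i].realScore;
    }
    printf("best score %.1f after fully scoring %d of 4 tiles\n", best, fullyScored);
    return 0;
}

Ordering the tiles so the likely winners come first (e.g. by path distance, as mentioned in (a)) is exactly what makes a bound like this prune early.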

WARNING: Theorycrafting mostly
Finally, you can (mostly) forget about the search tree and have machine-learning-aided algorithms be your daddy. These algorithms would apply to our case on a totally different level regarding how they discover the proposed action:
At the current tree-searching AI, we go bottom-up: given all the primitives, we try to combine them into a set that maximizes a score. The machine learning would go about doing the inverse: given a set of (usually) integer-based semantics that describe the current situation (like 'control' and 'firepower'), find an abstract tactic to employ by discovering what happened in similar situations in other games (one could say that the edict_t's could be enough, but we do need a limited amount of variables or we risk getting absurd results). Like 'retreat to X', 'entrench at Y', 'flank Z' etc. Then you properly decompose these to the basic primitives (with search trees like the current one if you like).
You may ask yourself: if we assume we can easily decompose the tactics to primitives, why on earth can't I just skip the machine-learning algos and find some other simple way of deciding which play to enforce? Well, I'd love to see the person that can do that. It may seem obvious to a human to observe the battle and decide which tactic is proper, but try to put that in code while generalizing enough to have a play for every possible gamestate. What the machine learning offers us here is the automatic discovery of this algorithm if given enough examples. Some machine learning algorithms can even produce a human-readable result, but from what I've seen they generally suck for complex situations such as this (rule-producing algos).

So, off the top of my head, here are the things that would need to be done in order to try this (and I do mean try, nothing says this will guarantee better results than the current AI):
* First of all, we need metrics of 'success' in order to evaluate the result of a fight. They don't have to be formally defined if we employ strict supervised learning, where some people go through the results of hundreds (or thousands?) of tests and assign a score to the result. An unsupervised metric could simply be the % of HP dmg we have caused to the enemy. Finally, we can even have the algorithms learn as they go (part supervised, part unsupervised), where they will augment their home-made knowledge with the results of every battle they take part in (this also leads to adapting to and countering a certain gamer's playstyle). This last one is pretty ninja-looking stuff and does produce a totally different experience for every player.
Here's a set of notes that I'm just too bored to translate in the wall-o-text:
* Learning algos need to define a situation with a small set of (say, max 12) numeric, NON-CODEPENDENT (can't stress this enough) values that describe the current battle situation. How on earth can you do that? I can imagine defining 'control' as the % of space that is visible, 'firepower' as the avg dmg you can apply to the visible tiles, etc. Finding the proper set of descriptors can easily surpass the selection of a proper algorithm in importance when it comes to success. PCA can help a lot here (http://en.wikipedia.org/wiki/Principal_component_analysis).
* Depending on the selected algorithm, it would be helpful to have a simulator so it can game on its own to gain experience. I guess it's possible if you make the AI talk to a stub instead of the server.
* Needs a large number (the larger the better, say well over 100) of multiplayer games (maybe not entire games, but just a few turns) where the tactics are employed (and properly annotated somehow) by humans, so it can actually get a starting 'knowledge' from somewhere. This could instead be just playing against the AI yourself and then evaluating. You could even ask the general public to do that, although I'm guessing the results will not be that reliable because of the differing skill levels of the players.
* Some of these algo's are lightning fast (really, next to 0 execution time, see SVM's), but those are the ones that require most of the work done in the learning part.
* Since we're dealing with plans and not individual moves, we're transcending the confines of a single, or even a couple of, moves. The decisions are made in a way that is more likely to provide us with a favorable outcome for the entire battle, not just a particular move. Every time the battle changes (every shot, every new visibility etc), we rethink our strategy and keep on going.
*The list of algorithms and their modded counterparts is quite extensive, and can range from a simple&straightforward Bayes-theorem application to dark-magic neural networks (which are silly-easy to code, yet picking the correct parameters seems more like a game of chance). If these last few paragraphs gave you a hard-on, try reading some wikipedia (http://en.wikipedia.org/wiki/Machine_learning) and then you can go from there.
Stephen Fry is more likely to get out of suspended animation before I can manage to (convincingly) solo-code these things in my spare time.
Title: Re: the AI discussion
Post by: nonickch on June 30, 2010, 04:41:39 pm
Next up: actually doing some work rather than babbling: Observing the behaviour of aliens according to the GUETE_* #defs
For these, I employed the modified g_ai.c file attached at the start of this thread, and started playing with the numbers.
One of the things to notice is that you cannot really assign a distinct GUETE value to a type of 'behaviour'. This is a direct result of the fact that the values add/multiply on top of each other, so the actual behaviour is the sum of all the values. Some initial testing verified this, so instead I decided to try and get some general idea about what happens for extreme values of each of the defs. So I keep the standard values for all but one in each test. Then the next step is to find a 'best' value for each of them and combine them.
Then re-optimize each value with that new set of starting values etc,etc (which can be a work in progress forever).
The list of GUETE defs toyed with is:
GUETE_HIDE: default 60, the bonus(additive) score for being hidden at the end of the move. Double the bonus if there was no target.
GUETE_CLOSE_IN: default 20, the bonus (additive) score for closing in on any of the enemy soldiers. This is used as a 'don't dash to the enemy before shooting him' term (bonus = GUETE_CLOSE_IN - move) and also as a bonus for ending up closer to the enemy soldiers at the end of your turn: score += guete_close_in->integer * (1.0 - minDist / CLOSE_IN_DIST) (absolute distance, as in 'it doesn't matter if it's 1000 steps away in pathing distance'; also the AI cheats here as it checks all entities, not just visible ones). CLOSE_IN_DIST is set to 1200 in the defs and I didn't really touch that; minDist is the distance to the closest opponent. See the worked numbers just after this list.
GUETE_KILL: default 30, the bonus (additive) score for expecting to kill an entity with the shot you are considering making.
GUETE_RANDOM: default 10, the randomizing bonus to an action. This is set to zero on all the tests in order to avoid skewed results.
GUETE_CIV_FACTOR: default 0.25 (or 25 in the cvar version), the (multiplicative) malus for shooting a civilian. This nerfs the base damage score because we're shooting a civilian.
Even though I've cvar'ed other GUETE things, they are mostly about the civvies and the LUA AI, so ignore them.
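Worked numbers for that close-in term, straight from the quoted formula (guete_close_in * (1.0 - minDist / CLOSE_IN_DIST), with CLOSE_IN_DIST = 1200): at the default of 20, a spot right next to an opponent gets the full +20, one 600 units away gets +10, and anything at 1200 units or further gets nothing (the term actually dips slightly negative beyond that). At -1000 or +1000 those same spots swing by hundreds of points, which is why the extreme runs below completely drown out every other bonus.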
I will be using 4 thresholds in each test: -1000, 0, default, 1000. This should tell us which behaviours are influenced by each value. Observations are mainly a comparison between these 4 values.
My test map is the first one that pops-up in the skirmish: +africa small.
I observe the first 4 turns of the game without moving my units. I deviate from this only where otherwise mentioned.

So, without further ado:
GUETE_HIDE (WARNING: if you're using trunk >=30780 or just dodon's patch in the following page, this observation goes out the window)
Right, the default value is 60. This means that given a choice between expecting to deal less than 60 dmg OR hiding, the alien will opt to hide. That's a pretty harsh value I believe.

default(60): Well, the classic behaviour you all know. Aliens almost always opt to end up hidden. Bloodspiders don't dare attack me for fear of staying visible at the end of the turn (which ends up with them perma-hiding in a bungalow next to me). After 4 turns only one of my exposed squaddies is dead and about 4 civvies. Quite depressing.
-1000: madness! The aliens simply refuse to stay hidden. In the first turn 7 of them become visible, the 8th just didn't have enough TU's and appears at the 2nd turn. Since they don't seem so preoccupied with hiding behind a basket, their itchy trigger fingers produce (in the first 4 turns) 4 dead civvies and 6 of my squad members. A 7th is running around rumbling and the 8th is slightly injured. And that's with the 3 bloodspiders dashing from the other side of the map and one alien having no weapons on him after turn 2.
0: Splatter. 6 civvies dead and my whole squad at turn 4 (at turn 3 all were mad). I'd attribute the better kill ratios to the fact that the -1000 was hampering the alien movement by forcing them to end their turn in visible spots. The aliens don't seem to have any preference for staying hidden or not (makes sense, the bonus no longer applies).
1000: Aliens really love their hiding, as expected. This ofc has a drastic effect on their damage. 4 civvies dead, only one of my soldiers slightly injured. The pop-shoot-hide occasions are next to non-existent (compared to the default value). I'm guessing the aliens were so afraid of getting seen that they couldn't even get to a pop-shoot-cover position. Only in turns 3-4 did one alien find a far-away cover that could produce a shooting spot next to it. Result: 2 hits with a pistol. All of the dead civvies are in areas I don't have vis on. Strange flukes: an alien is staying still and visible behind a fence, I guess he considers it hiding even though I have a 20% shot on him. I guess this implies hide may refer (possibly exclusively) to cover. Need to revisit the code for that. Also: a bloodspider parked 2 tiles away from a soldier of mine (while visible ofc) while dashing towards an empty bungalow next to the soldier (the spider hid in there and stayed there till the end of the test). I don't really know how this could've produced a max score, as it could've hidden on the other side of the bungalow. Maybe the hiding doesn't only refer to my team, but to the civvies as well (contrary to what I remember from the code).

Some general thoughts/notes:
*map design prolly affects aggressiveness penalty caused by the hiding bonus value.
* possibly a nice heuristic because of the preference of aliens for pop-shoot-hide spots: explore such branches of the search tree first (prepBestAction checks these first against FighterCalcBestAction), since they are likely to produce high scores (so cutoffs prune the tree faster).
* aliens-shoot-through-walls happens a lot in this map, interesting.
* General observations: >0 values increasingly lower the lethality of the engagement. It does expose them to counterfire though. Values <0 increase the lethality, although at a much slower pace, and are also counter-productive to the survivability of the aliens. Stands to reason to ignore negative values.
*I suspect the best value is somewhere between 0 and 60, possibly around the 30's area. Needs real combat testing to really evaluate and it's really a matter of personal choice which battle style is preferred.

GUETE_CLOSE_IN
As mentioned before, this value is both used for:
* Take shorter paths to the shooting spot (as the comments mention: don't run around the target before stabbing him)
* Force the aliens to get closer to the enemies (in cheat-o-vision) so the engagement happens sooner rather than later (this should also save you from hunting forgotten aliens in rooms with little vision).

default value (20): At round 4, the aliens have killed 4 civilians and slightly injured one of my soldiers. Most of the aliens have only crossed about half of the map, 'cept for the spiders, which are now hiding in the nearby bungalow.
<ah... defeatfear theme kicks in. Taking 5 mins to headbang to it>
-1000:
The Ballad of Brave Sir Robin
Bravely bold Sir Robin rode forth from Camelot.
He was not afraid to die, O brave Sir Robin!
He was not at all afraid to be killed in nasty ways,
Brave, brave, brave, brave Sir Robin!

He was not in the least bit scared to be mashed into a pulp,
Or to have his eyes gouged out, and his elbows broken;
To have his kneecaps split, and his body burned away;
And his limbs all hacked and mangled, brave Sir Robin!

His head smashed in and his heart cut out
And his liver removed and his bowels unplugged
And his nostrils raped and his bottom burned off
And his pen--
Right, 4 rounds of the aliens running away to the furthest possible spot, blocking each other en route. 0 shots fired, funny to behold. Hiding was not affected as much (couldn't see many of 'em at the end of turns), but that probably has something to do with the fact that the furthest edge of the map had my line of vision almost completely blocked by bungalows. I'd definitely expect them to totally ignore cover (or anything else for that matter) in favor of running away. Another thing I noticed is that reaction fire has nothing to do with the AI; from the looks of it, all leftover TU's on aliens are automatically used for RF (either that or they can always RF, don't know which).
0: 5 civilians dead, 1 squad member. Interesting to point out is that this came from pretty much 2-3 aliens, all ranged. After disabling the AI with g_ai_kill, I took a stroll and found that the rest of the aliens were hidden in spots that couldn't produce any shots within walking distance (this causes FighterCalcBestAction to return 0 and forces PrepBestAction to stand still). Blood spiders seem to be completely useless at this value as they will never enter a range where they could reach their target. The aliens would also refuse to take a diagonal step ahead which would produce the same hiding spot, yet a closer range to shoot from. Apparently the hit % bonus is not strong enough to force this on its own (rounding issue? this was observed on a pistol wielder).
1000: 7 civvies dead and all of my squad by round 4. The hiding feature seems non-existent too. I did see aliens move away from my soldiers, which probably means they were getting closer to a civvie. Ah, that's the perp: the code rewards total distance closed in AFTER shooting, and the distance is to any non-team-member (so civvies count too). That'd explain most of the civvies dying first.

Some general thoughts/notes:
* positive values definitely help aggression & map completion by bringing the enemies closer to you, which also helps them explore more of the map. Negative values could be used only in strange cases (hunts) or for a simplistic civvie behaviour (I believe the civvie AI uses a similar method, not sure since I've only glanced at it 3 days ago). Also, this value does influence the hiding behaviour since at high enough values it can overshadow it. I don't believe though that we should ever investigate such high values. From this simplistic check I found that the default value looks ok. It's high enough to progressively bring the enemies towards you, but it's not large enough to overshadow shooting stuff or hiding. If someone wants to further experiment with this value, I'd suggest the 20-60 range (at 60 we interfere with hiding a lot) with a focus on the 20-30 range (remember, we're cutting into the aggression by preferring a dash instead of a shot). Bear in mind that the large close_in bonus happens AFTER the shot; before that it works as a malus:
Before: max(guete_close_in->integer - move, 0)
After: guete_close_in->integer * (1.0 - minDist / CLOSE_IN_DIST)
Bear in mind that I intend to modify the 2nd part by factoring in the path distance (try to avoid having them stuck behind the hull of ships)

GUETE_KILL
A simple additive score boost that rewards killing a target. Default value of 30, which gets an additional bonus if the target was in reaction-fire mode (GUETE_REACTION_ELIMINATION). The extra bonus should make aggression towards RF'ers preferable, but it would make this test a lot harder (aliens dying), so I don't put RF on, and as such I take the reaction_elimination out of the picture.
default(30): Using the 4 civvies dead & 1 squad member result that the previous tests seem to converge on.
-1000: 3 civilians dead and one injured squaddie. Interesting to mention that the alien that landed the hit on my squaddie immediately switched fire to a harder target. One would expect that there should be 0 deaths with this value (I did), but then I remembered that the dmg score also factors in the hit %. So those deaths were probably the 'lucky' hits.
0: Interesting results. Only 2 civilian kills and one heavily injured soldier. I can't really explain the result, someone care to repeat this test and see what they find? Supposedly this value would make the aliens ignore whether they'd kill someone or not and make them go for raw dps. Instead, I get minimal dps & deaths. Will have to repeat this sometime.
1000: 2 civvie deaths and some melee enemies sitting next to my soldiers, visible, doing nothing. This gets weirder by the moment. Redid the test, same results. Maybe I need to look at this with a clear head at a later time.

*Bloodspiders/melle heavily influenced by this metric, at least in the negative values where they just freeze up.
*Initially i thought that this could be used in order to make snipers prefer larger distances. Then I realized that this would make them useless at close-range, unless the value is somehow tied to the weapon (so we could theoretically force a weapon change?)
* Due to the probabilistic nature of when this metric is applied, it's not that easy to weigh it without repeated tests. I can only go as far as to say that this *should* affect how flimsy a shot the aliens are allowed to take when a possible kill is involved (which might partly explain why aliens sometimes prefer to take shots at civvies further away, despite the civ-target nerf; the other reason ofc being the large base dmg due to their lack of armor).
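As a thought experiment on the weighting, here's a tiny standalone sketch of treating the kill bonus as an expected value (kill probability times GUETE_KILL) rather than an all-or-nothing reward. This is not how g_ai.c currently applies it; the names and the probability model are assumptions for illustration only:
Code: [Select]
#include <stdio.h>

#define GUETE_KILL 30

/* Hypothetical: weight the kill reward by the estimated chance that the shot
 * actually kills (hit chance times "damage >= remaining HP" chance), so that
 * flimsy long shots on civvies stop looking as attractive as solid kills. */
static float AI_ExpectedKillBonus (float hitChance, float killChanceOnHit)
{
    return GUETE_KILL * hitChance * killChanceOnHit;
}

int main (void)
{
    /* a sure-thing point-blank kill vs. a flimsy long-range pot shot */
    printf("solid kill : %.1f\n", AI_ExpectedKillBonus(0.90f, 0.95f));
    printf("flimsy shot: %.1f\n", AI_ExpectedKillBonus(0.25f, 0.40f));
    return 0;
}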


GUETE_CIV_FACTOR
This is a multiplicative malus that is directly applied to the damage expected, right before it becomes the base score for this action: dmg *= (guete_civ_factor->integer)/100.0
Please note that the original value is a float, but I switched it to an integer and added the "/100" part. I'll still consider -1000 and +1000 (rather than -100 and +100) since it still makes sense to check for extreme ranges first.
default(25): Copy/pasta from previous examples, 4 civvies dead and one squaddie
-1000: 2 civvies dead (??!!) and one squaddie. The 2nd civ death was collateral from a grenade launcher; the first one, I believe, was because the civ was in the line of fire. Other than that, civs sitting right next to the aliens were completely ignored.
0: 0 civvies and 3 squaddies dead (damn that alien with the plasma grenade launcher). This time no civvies were in the line of fire. Despite the difference in results, I'd say this was a fluke. Both this and the previous example should be identical, since the factor is multiplicative and makes the aliens totally avoid targeting civvies.
1000: lots of kills; that guy with the bazooka got a monsterkill. The only behaviour worth mentioning here is that at no point would an alien pass up a civvie to shoot a squaddie of mine instead. There was one occasion where I received direct fire, and that was from an alien that closed in and, from his hiding spot, could only reach my squaddie.

notes:
* This is a pretty clear-cut case. It handles how willing the aliens are to shoot at civilians. It indirectly affects other behaviour by letting civ-shooting get in the way of doing other things (like shooting your squad).
* I believe the original value of 25 (0.25 in the #defs) is too large. I'd recommend looking at something closer to 0.15, maybe less. Remember, most civ hits are kills, so the dmg (how many HP do civvies have?) will usually get the kill bonus on top (I don't know if the auto-RF flags civvies as RF'ers; that'd add the RF elimination bonus on top as well). So, if a civvie has 50 health, that's 80 base score for a certain hit, which can easily overshadow aggression towards your team on many occasions.

OVERALL & NOTES:
* From those simple tests, it's obvious that the GUETE values don't just dramatically affect the efficiency & behaviour of the aliens as a combat group, but also the bloodspiders/melee as a race/outfit. Values that may look nice for ranged units will probably not be so nice for the melee ones, and vice versa. One more reason for creating individual/race-specific sets of GUETE values (apart from the 'nice feature' aspect of it). I suspect that the various degrees of long-ranged and short-ranged weapon wielders show a similar, yet not so dramatic, effect. Example: there's no reason for a sniper to get any positive bonus from close_in in the absolute-distance calculation. Maybe we could go as far as to say it should be a malus for him.
* Fine-tuning these values to perfection is probably going to have an inverse-square relationship between time spent and performance gained. Still, it should be considered an ongoing sport, even past the initial large-gain period. It should especially be revisited when something significant changes in the game (new races/weapons or combat mechanics).
* Changing the numeric values is the only approach examined here. Another way to go about it is to transform the additive effect of most of them into a multiplicative one. That would create another interesting set of values that can easily turn out better or worse.
* Changing the weight of the base dmg in the score (applying some kind of multiplier at the end) can be just as important as the GUETE values. This needs to be tested and tuned accordingly (TODO).
* I noticed that aliens will throw the sole (melee) weapon they have. We should get them to avoid doing that in general, and/or take into account that next turn they will have to spend TUs to re-equip a weapon. If we force them to re-equip a weapon in the same turn (or stop considering the throw if they can't), we don't need the avoidance part.


If someone wants to help, I'd appreciate:
* People to repeat those tests and evaluate/comment.
* Maybe some kind of fixed small/medium map where spawn locations & numbers & equipment are also fixed. That way we can compare results without needing many repetitions and deviations due to the randomness of the skirmish.

On a semi-related note (aliens shooting through walls):
"dmg = vis * (fd->damage[0] + fd->spldmg[0]) * fd->shots * shots;". This is the base dmg calculation. See the vis variable? That's G_ActorVis(), which returns 0.0-1.0. I have also noticed that the shooting-through-walls happens a lot more often on maps with objects that, for some reason, you can see through at some point (the huts of Africa, for example). The huts are also the example where the aliens tend to futilely shoot upwards. Maybe there is a disparity between the visibility and shooting lines that confuses the AI and makes it shoot through unbreakable & semi-transparent objects? Glass, for example, works fine here, but hut walls don't. Maybe it's an issue with the definition of the wall itself? I can't really say much since I have no idea what happens outside of g_ai.c. I could start printing out the vis & dmg values and wait to see the results when a wall-shooting occurs.

Finally, I have to add that I'll probably need someone to review any/all the code I write. I've been Java-spoiled for the past decade or so, and I'm really terribad at keeping 'free()' in mind ;) I've also forgotten every single performance coding tip I ever knew.
I'm also terribad with handling svn access. So me playing with commit without someone holding my hand can be a bad idea.
Title: Re: the AI discussion
Post by: shaunc311 on June 30, 2010, 09:23:24 pm
I would love to help you test this out.  It sounds like a lot of fun actually.  Is there a certain build I need to get to start messing around?  Also, is there a way to either save a video or another build that stores the movements of all the players and enemies?  That seems like it would help a lot with testing this.

One question I have about the GUETE_ metrics.  Would it be possible for the aliens to modify them as the game progresses?  I have absolutely no AI programming knowledge but here is what I'm thinking: 

Let's say that for the very first mission all the aliens start at the default settings and the game keeps track of all the damage they do to players and civs.  They keep it like that for a few missions to get a good average damage.  Then maybe the game bumps up GUETE_KILL (or another stat) for a few missions and sees if the average damage increases.  If so, then it changes the default to that and starts modifying another attribute.  If it decreases, then it drops it back down but remembers that that particular combination of values didn't work, so it knows not to try it again.
Title: Re: the AI discussion
Post by: ovvldc on June 30, 2010, 10:04:25 pm
I've done some agent-based modelling for my PhD, and I fully agree that finding salient criteria is much more important than the exact computation. Also, heuristics will be less computationally intensive than calculating everything.

All in all, I would suggest making a diagram, even simpler than the one that nonickch did, that lays out the basic line of thinking. What variables come into play, and when?

For example, I play patiently. When I play, I always think about where my guys end up at the end of the turn, especially if the aliens are far away and not always visible..

So then the first way to evaluate is to award favourable points to any place on the map as an end point.
1. Do I know of any enemy in view (of that alien or a teammate, to include some cooperation)?
2. Can I go somewhere that has views in directions other than those I or my teammates currently have? (What places are not in line of sight?)
3. What places (within my TU range) have cover on many sides (NSEW)?

Until an XCom soldier comes into view, you could just have the alien walk via places with new views to places with high cover. Interrupt as soon as an XCom soldier comes into view.

Now if an enemy is in view, you get additional considerations:
4. What places are closer to visible enemies than my current position?
5. Where are the (three?) nearest visible enemies (average into a 'centre of gravity')
6. Where are the (three?) nearest friendlies (average into a 'centre of gravity')
7. What is the main line of fire (vector between centres of gravity), in what places could I get line of sight to an enemy from a different/cross angle? (more teamwork)
8. What places are available where the known enemies cannot see me?
9. What places are available where the known enemies have trouble hitting me?

Now all the places that are interesting to be are already known. Some are safe places and some are not at all safe (if you can see them, they can usually see you). But for attacking, some of the squares with LOS to an enemy need to be evaluated further.
10. What weapons do I have in my hand? Can it/these hit from there?
11. In case I go there and shoot, what is the safest place I can end up in?
12. What other weapons I have in my inventory? Can they hit from there?
13. In case I go there and change weapon and shoot, what is the safest place I can end up in?

After thinning out some more squares in this way comes the attack calculation:
14. What damage can I expect from shooting with any of my weapons from that square?

In this system, each place could be evaluated essentially by five variables: VIEW (how much lookout do I have), SAFE (cover/hiding), HIT (can I attack), DAMAGE (how much can I inflict) and TIME (how long will I spend there). You would want to plot a route within the maximum total TIME from where the alien is now to somewhere with high SAFE, while stopping somewhere with high HIT and DAMAGE and shoot, as well as passing through as many places as possible with high VIEW.
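To make this concrete, here's a tiny standalone sketch of scoring a candidate tile as a weighted sum of those five variables. The struct, the weights and the numbers are all made up for illustration; nothing here is current g_ai.c code:
Code: [Select]
#include <stdio.h>

/* One candidate end-of-turn tile, rated on the five axes described above. */
typedef struct {
    float view;    /* how much new lookout the tile gives   (0..1) */
    float safe;    /* cover / chance of staying hidden      (0..1) */
    float hit;     /* can I attack from here at all?        (0..1) */
    float damage;  /* expected damage if I do attack        (HP)   */
    float time;    /* TUs spent getting there and shooting         */
} tileScore_t;

/* Hypothetical weighting; a real AI would tune these per race/weapon. */
static float AI_ScoreTile (const tileScore_t *t)
{
    return 10.0f * t->view + 40.0f * t->safe + 20.0f * t->hit
         + 1.0f * t->damage - 0.5f * t->time;
}

int main (void)
{
    tileScore_t sniperPerch = {0.9f, 0.8f, 1.0f, 35.0f, 18.0f};
    tileScore_t openGround  = {0.6f, 0.1f, 1.0f, 40.0f, 10.0f};
    printf("sniper perch: %.1f\n", AI_ScoreTile(&sniperPerch));
    printf("open ground : %.1f\n", AI_ScoreTile(&openGround));
    return 0;
}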

The result is like superimposed landscapes with hills and valleys. The VIEW landscape needs to be updated whenever your side moves. SAFE and HIT need to be updated when their side moves or when enemies get discovered during your turn. DAMAGE needs to be updated a lot, but only in places where HIT has a sufficiently high value and that are within walking distance.

Using this kind of thinking you can install cutoffs that keep you from doing all of the calculations for everywhere. Obviously, this was extremely lightweight in the time of UFO:Enemy Unknown, when you only had a thousand or so squares. Now the maps are more fine-grained, so some location sampling and interpolation may be needed.

Good luck,
 Oscar
Title: Re: the AI discussion
Post by: nonickch on July 01, 2010, 12:13:51 pm
So then the first way to evaluate is to award favourable points to any place on the map as an end point.
1. Do I know of any enemy in view (of that alien or a teammate, to include some cooperation)?
2. Can I go somewhere that has views in directions other than those I or my teammates currently have? (What places are not in line of sight?)
3. What places (within my TU range) have cover on many sides (NSEW)?
Last night I was having a discussion with arakis on IRC about the AI and I realized that the AI actually cheats major-style in the visibility department. FighterBestAction checks for visibility on the target before considering shooting him, which tricked me into thinking that it's all kosher. What I forgot, though, is that it sets the alien's ent->origin to the spot it's considering. This means that all the aliens get futuro-vision and know what they can see from any spot they can move to before actually going there. I previously assumed the autopsy reports were to blame for behaviour indicative of that (they all say they have perma-IR vision), but it may just turn out to be this.
This little trick actually does away with all the visibility considerations the AI would ever need to make. Should we ever decide to drop this shortcut, we will need to make up for it with some heavy visibility calculations. It's definitely a way to improve the 'feel' of the AI, but it adds a penalty to its effectiveness unless we do the vis checks much better. Pragmatist I am not; this is definitely something I'd like to try out, but only after other areas are finished first (everything already set @todo in my posts, quite a few).
For the pragmatists that will definitely ask 'why should we nerf the AI and then augment it again instead of just keeping the cheat in place?', here's a couple of reasons why we should:
* Ever managed to sneak up on an alien? Me neither.
* You can actually employ tactics like distraction much more effectively. Your rabbit down there is getting the alien behind the window on the 1st floor to try and shoot at him rather than checking if someone is sneaking up on him. As it is right now (since I've tried this tactic), the minute your backstabbing guy gets anywhere close to the sniper, the sniper lets the rabbit go. Ofc your tactic still works when the sniper promptly gets stuck behind the corner.
* The aliens will look much more active. They will actively patrol areas (the vis maximization score) trying to find targets.
* I suspect that this will make them <3 their sniping positions. As it is right now, mappers place perched aliens in godly positions, only for the CLOSE_IN metric to kick in and make them leave their position on turn 1.

All your listed metrics sound quite proper for this task. Say, are you interested in some coding-fu? ;)

Now if an enemy is in view, you get additional considerations:
4. What places are closer to visible enemies than my current position?
5. Where are the (three?) nearest visible enemies (average into a 'centre of gravity')
6. Where are the (three?) nearest friendlies (average into a 'centre of gravity')
7. What is the main line of fire (vector between centres of gravity), in what places could I get line of sight to an enemy from a different/cross angle? (more teamwork)
8. What places are available where the known enemies cannot see me?
9. What places are available where the known enemies have trouble hitting me?
Calculating clusters of enemies/friendlies! Excellent idea!
At least for the enemies, I can see this getting in ASAP via the close_in metric: try to approach the centroids of clusters (mmmm, clustering algos here I come) instead of individual targets. I'll be doing the changes to close_in right after the GUETEs and the wiki, so let's see what we can get here :)
As for clusters/centres of friendlies, hmmm. I don't think we could use this as a bonus metric since that would make aliens slowly pack in tight spots, but as a penalty metric (don't wander off X tiles away from your local group unless you have a good reason), it sounds really nice.
(7) is an interesting point. I was thinking along these lines when trying to imagine how we could possibly define our & enemy lines. Cluster us, cluster them, sketch a line between the centroids, draw a perpendicular line on either side (length varying with dispersion) and voila! A basic definition of a combat front for either side. The sweet part is that it can work the other way around too: if a chosen tactic requires moving your front, you can just add bonuses to spots that move the centroid & dispersion accordingly (that ties into the decomposition into primitives mentioned in the machine-learning theorycrafting).
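For the record, the centroid part is trivial; a minimal standalone sketch (made-up types and positions, not engine code) looks something like this:
Code: [Select]
#include <stdio.h>

typedef struct { float x, y; } vec2_t;

/* Average a group of actor positions into a 'centre of gravity'. In the real
 * game this would walk the living actors of one team via the edict list;
 * here it's just an array of made-up tile coordinates. */
static vec2_t AI_Centroid (const vec2_t *actors, int count)
{
    vec2_t c = {0.0f, 0.0f};
    for (int i = 0; i < count; i++) {
        c.x += actors[i].x;
        c.y += actors[i].y;
    }
    if (count > 0) {
        c.x /= count;
        c.y /= count;
    }
    return c;
}

int main (void)
{
    vec2_t aliens[] = {{10, 4}, {12, 6}, {11, 9}};
    vec2_t humans[] = {{30, 5}, {28, 8}};
    vec2_t a = AI_Centroid(aliens, 3);
    vec2_t h = AI_Centroid(humans, 2);
    /* the vector between the two centroids is the 'main line of fire' of (7) */
    printf("alien centroid (%.1f, %.1f), human centroid (%.1f, %.1f), front vector (%.1f, %.1f)\n",
           a.x, a.y, h.x, h.y, h.x - a.x, h.y - a.y);
    return 0;
}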
(8) and (9) are already calculated; well, it's more of a gradient of visibility.

Now all the places that are interesting to be are already known. Some are safe places and some are not at all safe (if you can see them, they can usually see you). But for attacking, some of the squares with LOS to an enemy need to be evaluated further.
10. What weapons do I have in my hand? Can it/these hit from there?
11. In case I go there and shoot, what is the safest place I can end up in?
12. What other weapons I have in my inventory? Can they hit from there?
13. In case I go there and change weapon and shoot, what is the safest place I can end up in?
After thinning out some more squares in this way comes the attack calculation:
14. What damage can I expect from shooting with any of my weapons from that square?
10 & 11 are already properly evaluated. So are 12 & 14, I think, but there are some obvious shortcomings. 13 is implied by the search tree and the fact that we count the remaining TUs.


In this system, each place could be evaluated essentially by five variables: VIEW (how much lookout do I have), SAFE (cover/hiding), HIT (can I attack), DAMAGE (how much can I inflict) and TIME (how long will I spend there). You would want to plot a route within the maximum total TIME from where the alien is now to somewhere with high SAFE, while stopping somewhere with high HIT and DAMAGE and shoot, as well as passing through as many places as possible with high VIEW.
Barring the VIEW metric (which is totally ignored because of god-o-vision), the rest are the basis of the current AI: SAFE is definitely in there and HIT&DAMAGE are merged into one multiplicative metric (named damage). The plotting of move-shoot-hide is exactly what the search tree is currently doing. With emphasis on the exact move-shoot-hide.

Excellent ideas all around :)
Title: Re: the AI discussion
Post by: Hertzila on July 01, 2010, 01:28:13 pm
10 & 11 are already properly evaluated. So are 12 & 14, I think, but there are some obvious shortcomings. 13 is implied by the search tree and the fact that we count the remaining TUs.

I'm not so sure that 10 is correctly evaluated. In a couple of cases there has been a window between me and an alien. For some reason the alien doesn't realise that and tries to shoot through it, even though my line-of-fire line turns red when I try to do so (so obviously the game knows you can't shoot through a window).
Title: Re: the AI discussion
Post by: nonickch on July 01, 2010, 01:52:26 pm
I'm not so sure that 10 is correctly evaluated. In a couple of cases there has been a window between me and an alien. For some reason the alien doesn't realise that and tries to shoot through it, even though my line-of-fire line turns red when I try to do so (so obviously the game knows you can't shoot through a window).

You are correct in that aspect. But afaik you can shoot through a window, I've done it multiple times (does the first bullet get wasted on the window?).
The whole shoot/can't shoot thing seems to be the one producing the 'aliens shoot through walls'. I was about to make comments on that, then I realized I have a very fuzzy knowledge of that part of the code.
I'm finishing off the new diagrams, creating the wiki page, have a go at the 'stuck behind hulls' issue and then the wall-shooting and all the los/lof things are next.


EDIT:
Upon remaking the charts, I ran into the AND/OR issue I have with AI_FighterBestAction again. This is the code that awards being hidden:
if (AI_HideNeeded(ent) || !(G_TestVis(hidingTeam, ent, VT_PERISH | VT_NOFRUSTUM) & VIS_YES)) {
    /* is a hiding spot */
    //bestActionPoints += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
    bestActionPoints += guete_hide->integer + (aia->target ? guete_close_in->integer : 0); //AI TESTING MODIFICATION FROM NONICKCH
}

What does that tell you? AI_HideNeeded checks for our visibility and compares it to our bravery (the brave can still stay exposed).
The Vis functions did get me quite confused: some new and old functions getting |'d together and a long trail of calls. If I'm not mistaken, that whole expression simply returns whether we're visible to any of the hostiles or not.
Doesn't that mean there should definitely be an && there instead of ||? I need someone to take a look at that, because if it is as I think, it has a serious negative impact on the AI and all the behaviour caused by GUETE_HIDE is skewed.
SOLUTION: It seems that AI_HideNeeded is used for morale checking vs enemies in a way that produces chicken-out moves. It is not enough by itself to allow aliens to seriously consider hiding. So all is fine. Dodon's post explaining things is below.

What is the 'old' visibility system that's implied in the G_VIS functions, and why is it still working with both systems?

wiki page is up: http://ufoai.ninex.info/wiki/index.php/Combat_AI
Can someone put the .zargo file into its place on the page? The original file is in the 1st post of this thread.
Title: Re: the AI discussion
Post by: shaunc311 on July 01, 2010, 04:32:52 pm
Thanks for the cool graphs and wiki page with the summary of the stuff above. 

Looking at them and the g_ai.c file, it looks like the current AI works the same no matter what difficulty level you are playing at.  That will be good to know when I play some harder campaigns later.
Title: Re: the AI discussion
Post by: ovvldc on July 01, 2010, 07:06:58 pm
This little trick actually makes away with all the visibility considerations that the AI ever needed to ever make.

Very unfortunate. Having a tactical (sub)game is so much less enjoyable if you can't do any tactics..

All your listed metrics sound quite proper for this task. Say, are you interested in some coding-fu? ;)

I wrote this all down as an approximation of my own decision-making process and to help the discussion about heuristics to clip the search trees. You'll note that my own emphasis is on moving, not on shooting, because my XCom soldiers carry at most two weapons (short and long range) so that the question of what weapon to use when I get where I want to be is pretty obvious. And since none of my soldiers were ever good enough to hit a cow's rear end at point blank range, I do lots of advancing from cover to cover with squadmates ready to provide suppressive fire.

Other people may have different lines of thinking.

Interested, yes. Flattered, certainly. But I am only a humble scientist and I couldn't do proper coding to save my life. And I haven't the time to learn proper C++ or something like that. I can read C++ equations (I wrote some 2300 lines including extensive comments) but I cannot write code that interacts with anything else. Besides, I'll probably have to learn R or Java for work and Finnish for my girlfriend, and those are enough foreign languages for now ;).

Calculating clusters of enemies/friendlies! Excellent idea!

Why, thank you :). It popped into my head as I was writing the rest. Though I didn't even think about clustering aliens in groups. Then again, if I were an alien stranded on a hostile world and cut off from my hivemind, it would seem like the instinctive thing to do. If you do use it, the choice on whether or not to wander off could also depend on bravery.

It is interesting that you thought of group fronts, whereas I thought of individual flanking actions ;).

Barring the VIEW metric (which is totally ignored because of god-o-vision), the rest are the basis of the current AI: SAFE is definitely in there and HIT&DAMAGE are merged into one multiplicative metric (named damage). The plotting of move-shoot-hide is exactly what the search tree is currently doing. With emphasis on the exact move-shoot-hide.

Excellent ideas all around :)

Thanks again :). My personal instinct would be to hold off the aggregation of SAFE and DAMAGE for as long as possible in the AI code, because they are such different sides to the battlefield coin. And having such different criteria (and other metrics) disaggregated probably lends itself better to discrete heuristic choice mechanisms, refactoring primitives, and caching results (to cut on calculations). But you seem to be a better coder than me, so feel free to tell me that my instincts are off the mark.

Best wishes,
 Oscar

EDIT: Having said that I have no time, I would still read and comment on this stuff with pleasure ;).
Title: Re: the AI discussion
Post by: Hertzila on July 01, 2010, 11:37:50 pm
You are correct in that aspect. But afaik you can shoot through a window, I've done it multiple times (does the first bullet get wasted on the window?).
The whole shoot/can't shoot thing seems to be the one producing the 'aliens shoot through walls'. I was about to make comments on that, then I realized I have a very fuzzy knowledge of that part of the code.
I'm finishing off the new diagrams, creating the wiki page, have a go at the 'stuck behind hulls' issue and then the wall-shooting and all the los/lof things are next.

I don't think you can shoot through windows that have actual glass in them (as opposed to empty space as they usually have). UFO:AI doesn't support destructible stuff in maps and I've understood that this extends to glass too.
I haven't looked at the code, and I think this would have been resolved very quickly if it were the case, but just to be sure, I think you should check that the game does a line-of-fire check instead of a line-of-sight check when deciding whether to shoot. AFAIK a mix-up between these two would explain it. (This is coming from a person who understands practically nothing about code, so take it for what it's worth.)
Title: Re: the AI discussion
Post by: dodon on July 02, 2010, 12:13:26 am
if (AI_HideNeeded(ent) || !(G_TestVis(hidingTeam, ent, VT_PERISH | VT_NOFRUSTUM) & VIS_YES)) {
    /* is a hiding spot */
    //bestActionPoints += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
    bestActionPoints += guete_hide->integer + (aia->target ? guete_close_in->integer : 0); //AI TESTING MODIFICATION FROM NONICKCH
}

I dug through the archive and I figured out what's wrong.


The first occurrence is OK:
Code: [Select]
/* hide */
if (!(G_TestVis(-ent->team, ent, VT_PERISH | VT_NOFRUSTOM) & VIS_YES)) {
    /* is a hiding spot */
    guete += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
} else if (aia->target && tu >= 2) {


Then a todo was added:
Code: [Select]
/* hide */
/** @todo Only hide if the visible actors have long range weapons in their hands
 * otherwise make it depended on the mood (or some skill) of the alien whether
 * it tries to attack by trying to get as close as possible or to try to hide */
if (!(G_TestVis(-ent->team, ent, VT_PERISH | VT_NOFRUSTUM) & VIS_YES)) {
    /* is a hiding spot */
    bestActionPoints += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
    /** @todo What is this 2? */
} else if (aia->target && tu >= 2) {


The next step is an implementation of the todo:
Code: [Select]
/* hide */
/** @todo Only hide if the visible actors have long range weapons in their hands
 * otherwise make it depended on the mood (or some skill) of the alien whether
 * it tries to attack by trying to get as close as possible or to try to hide */
if (!(G_TestVis(-ent->team, ent, VT_PERISH | VT_NOFRUSTUM) & VIS_YES) || AI_NoHideNeeded(ent)) {
    /* is a hiding spot */
    bestActionPoints += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
} else if (aia->target && tu >= TU_MOVE_STRAIGHT) {
This code is OK.


But if you look at AI_NoHideNeeded
Code: [Select]
/**
 * @param ent The alien edict that checks whether it should hide
 * @return true if hide is needed or false if the alien thinks that it is not needed
 */
static qboolean AI_NoHideNeeded (edict_t *ent)
you realize that the description of the @return and the function name don't match. So AI_NoHideNeeded was changed to
Code: [Select]
/**
 * @brief Checks whether the given alien should try to hide because there are enemies close
 * enough to shoot the alien.
 * @param ent The alien edict that should (maybe) hide
 * @return @c true if hide is needed or @c false if the alien thinks that it is not needed
 */
static qboolean AI_HideNeeded (edict_t *ent)
and our hiding code now is:
Code: [Select]
/* hide */
if (AI_HideNeeded(ent) || !(G_TestVis(-ent->team, ent, VT_PERISH | VT_NOFRUSTUM) & VIS_YES)) {
    /* is a hiding spot */
    bestActionPoints += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
} else if (aia->target && tu >= TU_MOVE_STRAIGHT) {
and that is an error: the name of the function in this context was right, so now there is a "!" missing. The code really should look like:
Code: [Select]
/* hide */
if (!AI_HideNeeded(ent) || !(G_TestVis(-ent->team, ent, VT_PERISH | VT_NOFRUSTUM) & VIS_YES)) {
    /* is a hiding spot */
    bestActionPoints += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
} else if (aia->target && tu >= TU_MOVE_STRAIGHT) {


The last change has no problems, so the code should be:
Code: [Select]
/* hide */
if (!AI_HideNeeded(ent) || !(G_TestVis(hidingTeam, ent, VT_PERISH | VT_NOFRUSTUM) & VIS_YES)) {
    /* is a hiding spot */
    bestActionPoints += GUETE_HIDE + (aia->target ? GUETE_CLOSE_IN : 0);
} else if (aia->target && tu >= TU_MOVE_STRAIGHT) {

---------------------------------------------------------------------------------------------------------------------------


Now let's take a closer look at AI_HideNeeded
Code: [Select]
/**
 * @brief Checks whether the given alien should try to hide because there are enemies close
 * enough to shoot the alien.
 * @param ent The alien edict that should (maybe) hide
 * @return @c true if hide is needed or @c false if the alien thinks that it is not needed
 */
static qboolean AI_HideNeeded (edict_t *ent)
{
    /* only brave aliens are trying to stay on the field if no dangerous actor is visible */
    if (ent->morale > mor_brave->integer) {
        edict_t *from = NULL;
        /* test if check is visible */
        while ((from = G_EdictsGetNextLivingActor(from))) {
            if (from->team == ent->team)
                continue;

            if (G_IsCivilian(from))
                continue;

            if (G_IsVisibleForTeam(from, ent->team)) {
                const invList_t *invlist = RIGHT(from);
                const fireDef_t *fd = NULL;
                if (invlist && invlist->item.t) {
                    fd = FIRESH_FiredefForWeapon(&invlist->item);
                } else {
                    invlist = LEFT(from);
                    if (invlist && invlist->item.t)
                        fd = FIRESH_FiredefForWeapon(&invlist->item);
                }
                /* search the (visible) inventory (by just checking the weapon in the hands of the enemy */
                if (fd != NULL && fd->range * fd->range >= VectorDistSqr(ent->origin, from->origin)) {
                    const int damage = max(0, fd->damage[0] + (fd->damage[1] * crand()));
                    if (damage >= ent->HP / 3) {
                        const int hidingTeam = AI_GetHidingTeam(ent);
                        /* now check whether this enemy is visible for this alien */
                        if (G_Vis(hidingTeam, ent, from, VT_NOFRUSTUM))
                            return qtrue;
                    }
                }
            }
        }
    }
    return qfalse;
}


If I am right, only brave aliens (when they are threatened by enemy fire) ever feel the urge to hide.
It looks like the code is stuck somewhere between the implementation of AI_HideNeeded and AI_NoHideNeeded.


This code should work fine.
Code: [Select]
/**
 * @brief Checks whether the given alien should try to hide because there are enemies close
 * enough to shoot the alien.
 * @param ent The alien edict that should (maybe) hide
 * @return @c true if hide is needed or @c false if the alien thinks that it is not needed
 */
static qboolean AI_HideNeeded (edict_t *ent)
{
    /* only brave aliens are trying to stay on the field if no dangerous actor is visible */
    if (ent->morale > mor_brave->integer) {
        edict_t *from = NULL;
        /* test if check is visible */
        while ((from = G_EdictsGetNextLivingActor(from))) {
            if (from->team == ent->team)
                continue;

            if (G_IsCivilian(from))
                continue;

            if (G_IsVisibleForTeam(from, ent->team)) {
                const invList_t *invlist = RIGHT(from);
                const fireDef_t *fd = NULL;
                if (invlist && invlist->item.t) {
                    fd = FIRESH_FiredefForWeapon(&invlist->item);
                } else {
                    invlist = LEFT(from);
                    if (invlist && invlist->item.t)
                        fd = FIRESH_FiredefForWeapon(&invlist->item);
                }
                /* search the (visible) inventory (by just checking the weapon in the hands of the enemy */
                if (fd != NULL && fd->range * fd->range >= VectorDistSqr(ent->origin, from->origin)) {
                    const int damage = max(0, fd->damage[0] + (fd->damage[1] * crand()));
                    if (damage >= ent->HP / 3) {
                        const int hidingTeam = AI_GetHidingTeam(ent);
                        /* now check whether this enemy is visible for this alien */
                        if (G_Vis(hidingTeam, ent, from, VT_NOFRUSTUM))
                            return qtrue;
                    }
                }
            }
        }
        return qfalse;
    }
    return qtrue;
}

What do you think?
It's quite late and I hope there are not too many errors in this post.
Title: Re: the AI discussion
Post by: Mattn on July 02, 2010, 09:56:58 am
attached the above change as a diff
Title: Re: the AI discussion
Post by: Mattn on July 02, 2010, 09:59:41 am
I don't think you can shoot through windows that have actual glass in them (as opposed to empty space as they usually have). UFO:AI doesn't support destructible stuff in maps and I've understood that this extends to glass too.

windows can be destroyed - but only if the mapper wanted them to be destroyable - see http://ufoai.ninex.info/wiki/index.php/Mapping/Entities/func_breakable
Title: Re: the AI discussion
Post by: Mattn on July 02, 2010, 10:24:51 am
@dodon - thanks, applied to trunk http://sourceforge.net/apps/trac/ufoai/changeset/30780
Title: Re: the AI discussion
Post by: nonickch on July 02, 2010, 01:36:05 pm
coding high-5 dodon  ;D

So, now: we give the hiding bonus to spots that don't cause our aliens to shit_pants (according to the morale/brave comparison) as well as to spots that make our actor disappear from enemy view. Try out a standoff landing; the results will be quite different from what you expected.

I tried a couple of skirmishes and oh boy does this slight change totally change the alien behaviour. You guys need to check it for yourselves: they're pretty much all over the place, not always hiding (even with the original GUETE value of 60).
If this 'error'/'new behavior'  was introduced into a patch, how come nobody went 'omg aliens are cowards now'?
Somehow the aliens think that right behind a soldier's back (after stabbing him) is a good place to hide (when they actually disappear it's OK, but they also do it when they don't). Todo, I guess.
Getting stabbed is now quite popular; aliens will no longer cower in a bungalow close to you, never to come out again.

We should remember that whatever makes sense to us and looks logical in code does not directly lead to better alien behaviour in the game. That's why we need testing even for slight changes, to see if we've turned them into zombies.

I'm trying to figure out a way to visualise the search tree to a developer. Is it possible to:
a) Draw those 'move-to' lines (that appear if I click a soldier of mine to move during the aliens turn)
b) Freeze the execution of the AI until some external event somehow arrives from the console/client (I can smell mutexes & a threaded client/ServerFrame being thrown at this; is there something simpler?)

This way you can examine some debugging output as the alien examines each tile separately (and visualized by the move-to lines). Type something in the console to consider the next tile.
Once we start using performance cutoffs this will be sorely needed since we will actually be able to see them kick in without going through thousands of debug lines.
I was trying to switch the close-in bonus to pathing last night, not much of a success (working on it). This is to fix the hull-hugging behavior.
While fixing that, I spotted another quite influential parameter for the AI: #def CLOSE_IN_DIST 1200. Now this one was very subtle and was overshadowed by the puzzling OR in the hiding calcs.
As it stands, this 1200 value is the vector distance at which the aliens will start gravitating towards your soldiers. As long as you're further away than that (to be precise: further away from a tile an alien is considering moving to), the CLOSE_IN bonus is effectively nullified. While trying out the pathing-distance metric, I'll be using something similar, like 50 TUs. That'd make an alien start gravitating towards you when he can get next to you within the next ~2.7 moves.
Ahhh, that's why my tests failed and the aliens looked stupid: they needed a larger value to get out of the ship. This complicates things, as the 80+ values needed to get aliens out of the ship are very large when no obstacles are involved (they can gravitate from the other side of the map). I'll also be testing a weighted average of both vector and path distance to see if it's better.
Hey, can I somehow remove the fog of war?
Title: Re: the AI discussion
Post by: shaunc311 on July 02, 2010, 03:37:16 pm
Hey, can I somehow remove the fog of war?

I'm at work right now so I can't play to test it, but I think this:

#define G_IsVisibleForTeam(ent, team) ((ent)->visflags & G_TeamToVisMask(team))

line in g_local.h (line 187) can be changed to true.
Title: Re: the AI discussion
Post by: nonickch on July 02, 2010, 03:50:21 pm
That would eat into the AI calculations. You could check and see if it's the player team's vision, but then again the AI does exactly the same calc to see if it's being seen.
If there's no standard way to do that, no biggie, I can just kill the AI and walk around instead.
Title: Re: the AI discussion
Post by: shaunc311 on July 02, 2010, 04:01:23 pm
I'm not any type of authority :)  I just saw that while browsing through some of the files and figured I'd try and help.
Title: Re: the AI discussion
Post by: dodon on July 02, 2010, 09:31:15 pm
If this 'error'/'new behavior'  was introduced into a patch, how come nobody went 'omg aliens are cowards now'?
The "hiding is good" part was always there. And the second part was added to make the aliens more visible.


Getting stabbed is now quite popular, aliens will not cower in a bungalow close to you never to come out again.
As the new behaviour needs high morale, the aliens will hide again when the casualties rise.


attached the above change as a diff
Ugh, missed that step yesterday  :(
Title: Re: the AI discussion
Post by: Mattn on July 02, 2010, 09:52:22 pm
console command "sv debug_showall"
Title: Re: the AI discussion
Post by: nonickch on July 02, 2010, 10:07:44 pm
Right.

Regarding dodon's patch, I've had conflicting reports from 2 people on irc.
One says it's much more challenging now, the other one says they're just plain suicidal/silly. We probably need to fine-tune values there and augment HideNeeded, because it only checks for the threat from single individuals and doesn't care if it's surrounded by 1000 lemmings (it only cares whether one of them can 1-shot 1/3 of its HP). It also checks for visibility of that specific actor instead of shoot-ability, so I guess that's why the AI now hides behind your back more often than not.

Found a new bug today while spamming debug messages around the code:
in AI_FighterCalcBestAction, there are cases where there is a target added to the action and no maxDmg set:
Code: [Select]
if (aia->target) {
    bestActionPoints += maxDmg;
    assert(maxDmg>0);
    assert(bestTime > 0);
    tu -= bestTime;
}
The assert(maxDmg>0) is there to prove the point.
The perpetrator is the ungodly large set of nested statements before it (we should really break them down ASAP) that calculates the damage:
Code: [Select]
maxDmg = 0.0;
for (shootType = ST_RIGHT; shootType < ST_NUM_SHOOT_TYPES; shootType++) {
See the maxDmg being reset? That shouldn't be there as it holds the current max dmg. I commented that out.
Also, dmg may end up holding a value even when a target is not remotely shootable (that's because of the continues), so the fix I've got is:
Code: [Select]
for (fdIdx = 0; fdIdx < item->m->numFiredefs[fdArray->weapFdsIdx]; fdIdx++) {
    dmg = 0.0;

Here's the .diff (http://www.mediafire.com/download.php?zrr2x4yyeoj)
Can someone refactor the entire set of firing calculations into nice little functions? It's kinda chaotic to read, and bugs such as this can creep in. I tried to, but I still don't feel comfortable enough with C code.
It'd especially be nice to have the calculations for the base damage as one function, as I could use it in AI_HideNeeded so the aliens properly evaluate the threat level (don't run headlong into my machine-gunner, you idiotic tammie).
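Something like the following standalone sketch is the shape I have in mind; the function name, the struct and the parameters are hypothetical, not existing g_ai.c/g_combat.c API, and the damage model is deliberately dumbed down:
Code: [Select]
#include <stdio.h>

/* Stand-in actor type just for this sketch. */
typedef struct {
    int hp;
    int bestWeaponDamage;   /* stand-in for walking the actor's firedefs */
    int tuPerShot;
} actor_t;

/* Hypothetical shared helper: estimate the damage 'shooter' can deal to
 * 'target' within 'tus' time units. In the real code both callers would use
 * it: the best-action search for picking shots, and AI_HideNeeded for
 * judging incoming threat. */
static float AI_EstimateDamage (const actor_t *shooter, const actor_t *target, int tus)
{
    (void)target;  /* a real version would factor in range, armour, hit chance... */
    if (shooter->tuPerShot <= 0)
        return 0.0f;
    return (float)(tus / shooter->tuPerShot) * shooter->bestWeaponDamage;
}

int main (void)
{
    actor_t machineGunner = {100, 18, 8};
    actor_t tammie        = { 60, 25, 10};
    printf("threat to the tammie next round: %.0f dmg\n",
           AI_EstimateDamage(&machineGunner, &tammie, 30));
    return 0;
}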

console command "sv debug_showall"
Many thnx for that one


Dammit, maybe I'll finally manage to get to the part where I can test the pathing close_in metric tommorow.
Title: Re: the AI discussion
Post by: nonickch on July 03, 2010, 03:43:29 pm
Right, today's work ended up being AI_HideNeeded, while trying to tone down the suicidal tendencies introduced by fixing the hiding checks.
There's quite a few things wrong with this function:

a) The check for whether we'd take too much damage is very poor. It only considers one shot (probably the simplest firetype) from an enemy and doesn't add up the damage when many enemies can see you (death by pistol firing squad; the dmg expected from a flamer is 7!!). I kinda fixed the many-enemies issue in my working copy (by just adding up the damage and using a %-of-HP threshold), but then I found the rest of the bugs:
b) Before we consider the damage we'd take from an enemy, we first check if our team can see him: "if (G_IsVisibleForTeam(from, ent->team)) {". BUT this doesn't take into account that we're considering going to places we still have no vision on. So aliens tend to rush into a firing squad as long as they had no vision on it. A test fix of assuming we're always visible still stumbles upon:
c) The damage expectation from a rocket launcher is always 0, from a shotgun at 0 range it's 11, from a grenade launcher 0. This essentially turns AI_HideNeeded into just a morale check, as I have never seen it evaluate to true because of the damage. This is where I'd need the function-ized damage calculations mentioned in the previous post.

EDIT: I'm rewriting the damage calcs and HideNeeded atm. No need for someone else to go at it at the same time
The current state is this: the basic damage calculations are so simplistic you could say they are totally wrong. Attempts to use the core server shooting algorithm are not going anywhere, since G_IsVisibleForTeam(ent) is used all over the place and that function does not take into account that we are considering an alternate position for our alien.
Anyone got ideas on how to fix this without messing up the server code? If not, I guess I'll have to start ripping out large portions of the G_* code :(
Title: Re: the AI discussion
Post by: Valaska on July 06, 2010, 01:23:38 am
 I dunno, the AI needs more aggression. On some maps it feels like a wild goose chase and gets extremely boring, then randomly they will get a lucky kill in and I restart. Then there are maps where they are just stunned and it's straightforward and easy.

 Then there are maps... Riverside Town. It's virtually impossible because you have to go to the aliens. These are supposedly a super-aggressive, dominating force: they already have a massive technology and equipment advantage over you, even more health and better reaction fire than you could ever dream of getting, etc.

 They need to push, come to you, be aggressive or do SOMETHING other than ducking in and out of cover; it's sickening, especially now that reaction fire is effed up beyond all recognition. They broke the game for me when they broke reaction fire. Sure, you can use the exact same weapons as everyone else does nowadays and play the game like some breeze-fest, but come on, I want a challenge or something different, surprises, etc.
Title: Re: the AI discussion
Post by: nonickch on July 06, 2010, 03:15:13 am
GOOD NEWS EVERYBODY
(http://www.clixbagoftricks.com/wp-content/uploads/2009/08/farnsworth.jpg)
Futurama is back!
Errr, I mean I solved many AI shooting-related problems. I think... eh, what? Oooh, but I do have my jammies on.

In related news, the damage calculations have been fixed & AI_HideNeeded was brought in line with the dodon patch (no more suicide, mkay?).
Kinda... I used proof-of-concept sampling of real shots via mock calls to G_ClientShoot, and, um, I broke my CPU (a turn takes 15 mins atm and crashes are frequent). I didn't check memory usage, but I expect that to skyrocket as well.
Remember the fan-out term and tree size theorycrafting? Good, here's how it all ties into this post, along with the performance cost estimations & the necessary optimizations. We consider each function separately:

GAH, upon review, my complexity calculations are WAY off. Too bothered to fix them atm.

AI_FighterCalcBestAction
All damage considerations for a given target are now moved to AI_GetDamagePotential. All notes on AI_GetDamagePotential are in reference to the code that existed in AI_FighterCalcBestAction.
AI_GetDamagePotential now drops visibility checks against the target because:
* (fix) Visibility checks were counter-productive. G_VisTeam was used, which incorrectly (yet efficiently) checks the vis flags. BUT a few lines above we moved our critter to a new position, which makes those flags obsolete (and G_Vis a terrible liar).
* (fix) We don't need to see someone if we wield an indirect-fire weapon like a grenade/grenade launcher.
This removes the visibility cut-off and produces a possible extra fan-out of anywhere between 0 (when all targets were already visible) and enemies+civilians (16?). So the worst-case scenario is 16x more leaves.
An obvious performance optimization is to re-introduce the visibility checks for non-indirect-fire weapons.

AI_GetDamagePotential now has all its damage calculations replaced with the average of 10 G_ClientShoot mock calls. This means that the damage expectation we have is the best there can be (even better than the UI estimations imho).
* (fix) Aliens properly pick shoot types and shooting spots. The difference in their dmg output is quite dramatic.
* (fix) Aliens no longer try and shoot through walls. LOF is properly calculated in G_ClientShoot.
* (fix) Aliens now know the destructive power of grenades & launchers. I did not test this, but it is most probable that they will strongly prefer crowded places for their splash weapons.
* (fix) Aliens properly evaluate friendly fire/self-harm chances (they currently subtract FF damage from damage to enemies in order to get to a final expected dmg).
The penalty we take on performance is quite significant. The fan-out may be only 10 (on top of the previous one, for a total of 160x the number of leaves compared to the 2.3 AI), but the tracing calculations needed here are a heavy factor.
Performance optimizations needed: a proper bridge to G_ClientShoot (no reason to try 10x; instead it could give us a base dmg + deviation), OR a complete copy of the damage calculations inside the AI code (bad: code replication & heavy maintenance). The best solution would be a mathematical equation that can give us comparable results with minimal CPU.
For people wanting to try and extract the math equations, have a look at g_combat.c. Entry point is G_ClientShoot. HELP APPRECIATED, I SUCK AT MATH

AI_HideNeeded
This function became an issue as soon as the dodon patch fixed a hiding issue, which moved quite some weight onto this function. Right, over here we need to perform a basic morale check (if morale < brave, then we need to hide), but most of all we evaluate what kind of damage we can expect in the next round (given the tile we stand on). The old style was very simplistic and quite mistaken in its damage expectations. So we replace it with a call to our new AI_GetDamagePotential for every enemy soldier out there. Simple, isn't it? That should fix:
* (fix) Aliens now know exactly how much damage they can take next round (not taking enemy movement into consideration). This means that they will not wander into range of large guns for no reason (reason enough being, ofc, slamming a grenade into your cluster of soldiers ;))
Now, this function is what causes the MAJOR fan-out disaster of doom. If you exclude the changes in this function, the AI is fast enough to play, but here we fan out again, on top of our 160x worst-case scenario. For every one of those leaves we check the possible damage from every enemy. Yes, that's another set of the same fan-outs we had up to this point (just with num_enemies rather than num_enemies+civilians). So that's a worst case of 8, and with the 10 mock shots each that brings the fan-out up to 8x10=80. Since this happens in sequence with the damage calculations it's multiplicative, bringing the total number of leaves to 160x80=12,800 in the worst case. Yes, this is the point where the CPU just shrieks and tries to lift off. And don't forget, the shooting calculations are a whole lot heavier than any other calculation we ever do.
Obviously, this is another place where we'd need to improve quite a bit if we still intend to perform full-tree, non-heuristic searches. Only checking vs enemies that have a direct line of sight would be a first step (but that would make us oblivious to indirect-fire nades/launchers). Other than that, I can't see many ways to keep this functionality and still preserve full-tree searching. The performance of this function is mainly influenced by AI_GetDamagePotential, so any performance gain (in fan-outs) there affects this function in a squared kind of way (8x8 vs (8-Y)x(8-Y)).

So there it is, a nice step forward. Woah, the aliens in the background game are giving me a REALLY hard time :}
You may wanna test this out, if only for a turn or two to get a general idea of what's really changing. You may also want to skip the changes that happen in the AI_HideNeeded  function if you only want to see the AI gains in the damage output department.
Oh, and I did change the g_ai.h #def GUETE_CIV_FACTOR to 0.20 (from 0.25) because it seemed right (I liked 0.15, but baby steps...). I also created a "#def GUETE_DAMAGE_AVOID 0.5", which is the % of current HP that an alien considers 'acceptable losses' (if exceeded, the tile does not get the GUETE_HIDE bonus). An obvious possible change is to flip it from a binary decision (run | not_run) into a gradient of score penalty.
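For the gradient variant, something like this minimal standalone sketch is what I have in mind; the names and the linear falloff are assumptions, not part of the patch:
Code: [Select]
#include <stdio.h>

#define GUETE_HIDE 60             /* the hide bonus discussed earlier in the thread */
#define GUETE_DAMAGE_AVOID 0.5f   /* fraction of current HP considered 'acceptable losses' */

/* Hypothetical gradient version: instead of granting or withholding the whole
 * hide bonus, scale it down linearly as the expected incoming damage climbs
 * past the acceptable-loss threshold, hitting zero at 100% of our HP. */
static float AI_HideBonus (float expectedDmg, float hp)
{
    float ratio = expectedDmg / hp;
    if (ratio <= GUETE_DAMAGE_AVOID)
        return GUETE_HIDE;
    if (ratio >= 1.0f)
        return 0.0f;
    return GUETE_HIDE * (1.0f - ratio) / (1.0f - GUETE_DAMAGE_AVOID);
}

int main (void)
{
    float hp = 100.0f;
    float dmgs[] = {10.0f, 50.0f, 75.0f, 120.0f};
    for (int i = 0; i < 4; i++)
        printf("expected dmg %5.0f -> hide bonus %5.1f\n", dmgs[i], AI_HideBonus(dmgs[i], hp));
    return 0;
}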
Here's the svn diff (http://pastebin.com/kgZ9bDHP) vs the latest-ish trunk. (DO NOT GET COMMIT-HAPPY WITH THIS)

P.S: Sweet jesus, my squad got mauled in the last turn, and I did position them nicely...
Title: Re: the AI discussion
Post by: Mattn on July 06, 2010, 09:13:31 am
I've updated the patch a little bit, fixed some compilation errors and applied the coding guidelines.

http://pastebin.com/63PBDkyn

Title: Re: the AI discussion
Post by: MCR on July 06, 2010, 12:30:24 pm
Hey Nonickch !

Great to see someone taking on the fight with the AI & already having some major successes !
Congratulations !!! (although I was not able to test the latest improvements yet)

Ofc I would not be MCR if I didn't have some opinions & suggestions regarding this subject as well ;)

So here they come:

1.) The idea of having a different AI for each specific race, ideally not hardcoded but scriptable via .ufo scripts, is something I strongly support & something I would really wish to see in some future version of the AI.

2.) Please do not forget that we already have some excellent & detailed descriptions of the various alien races, their capabilities, their behaviour & weaknesses, & following those descriptions would automatically mean having a substantially different AI behaviour for each race, as already suggested in this thread (YEAH, TOP IDEA - A MUST-HAVE FEATURE FOR THIS GAME !!!)  8)

Here some behaviour I would suggest:

Bloodspider:
A robot which should never hide & never panic, but strictly use all its time units for approaching & killing the next enemy target. The Bloodspider should maybe strictly attack the enemy & ignore civilians, which would mean that any bloodspiders on the battlefield would be the first to attack our landing crew & would be a threat that has to be dealt with as quickly as possible to survive...
The bloodspider's AI would not need much CPU-Power ;)

Taman:
These are described as being the most intelligent species, which also leads the other races when on the battlefield (group tactics ?).
They should naturally get the most advanced AI, but to avoid making them over-powerful (they will get 'Psionics' in later stages as well), I would suggest massive cuts to their god-o-vision ;)

Ortnok:
I would like to see those heavy, tough Ortnoks behave as brutally as possible, but I would also like to see them using dumb tactics & brute force, simply attacking the nearest target without making any damage calculation or considering whether the next target is a civilian or an enemy ;)
I do not know if I want to see the Ortnoks take cover; maybe we should make them just a little bit tougher, but more self-confident & straightforward ?! I also do not know if Ortnoks should ever panic ?!

Shevaar:
This is the race with the best (infrared) visual abilities (they could imho keep their god-o-vision). They are described as moving fast (we should somehow reflect this visually on the battlescape too) & as being quite intelligent & tough, but still they are the ones taking orders from the Taman when fighting together on the battlefield...

Here is the link to the descriptions of the various races for your inspiration:

http://ufoai.ninex.info/wiki/index.php/Aliens

Thanx again to all folks making our aliens behave wiser & even more challenging ;), especially to nonickch or his late-night-twin nonickch-drinkin for investing all this brain- & workforce here !!! GREAT !!!
Title: Re: the AI discussion
Post by: shaunc311 on July 06, 2010, 04:42:06 pm
Just as a preface, I'm not that great at C, but it looks like if G_ClientShoot returns false, you don't have to call it 9 more times.  The only time it returns false is when no shot is generated.  If it wasn't a mock shot then it could return false if the reaction fire kills you, but since this is mock, that isn't going to execute.  I doubt that saves much CPU though since all the hard math comes after those checks. 

You might want to experiment with breaking out of the if(shots) if it returns false since 0 shots should land.  That might not be valid though since the AI actor could run out of ammo.

I hope that helps a little.  I tried building trunk last night and it hung on the loading screen.  Going to try again tonight.
Title: Re: the AI discussion
Post by: nonickch on July 06, 2010, 05:51:20 pm
Mattn's edit of the original diff includes the removal of a NULL player_t * being passed into G_ClientShoot. This would happen when HideNeeded requested GetDamagePotential. The NULL was my solution to G_ClientShoot returning false because G_ActionCheck reports 'this is not your turn'. This is logical ofc, because we are mock-shooting for the humans during the aliens' turn.
ActionCheck, though, returns true by default if "!player". We also need to skip ActionCheck because the TUs left on the human soldiers are next to 0 (we simulate the shots by checking against 30 TUs' worth of shots with the appropriate GetDamagePotential param).

Now, I found what caused all of my crashes: it's a visibility check that uses the (NULL) player_t when we are considering the humans tossing grenades. I took the liberty of enclosing that check in an "if (!mock)" guard that seems to cover all the other vis checks in the code. If that's not viable for some reason, do tell me.
Now, for optimization in search trees: the higher up you optimize (see: prune) the tree, the greater the overall gain. I did notice that AI_Run is called on ServerFrame events, and only 1/10 of them is utilized. ActorThink is called way more than 8 times in the aliens' turn, so that's the first place to start.
What serverframes are, when AI_Run is really called and why we have to call ActorThink more than once for any alien are the issues I will solve... tomorrow, because today nonickch is hitting the bars.

Shaunc311: Yes, that's one of the things we should look into more. I do indeed depend on the fact that ClientShoot will quickly return false. I will debug a few calls to see if there is any chance of G_ClientShoot returning anything but false after returning it once (you should be right on this one, but I just wanna make sure).
Update: the returned qfalse is "typedef enum {qfalse, qtrue} qboolean;", so we can't really distinguish between 0 dmg because of a miss and qfalse because we can't take the shot.

MCR: I do love the customized alien behaviour as well. Up to now I've seen 2 possible ways of doing that:
a) Behaviour: Move/tie GUETE values (or some of them) to a race/weapon outfit. That can parameterize the aliens. If it needs to be LUA, I guess we could have LUA alter the GUETEs, then call AI_ActorThink. But aren't the alien stats scriptable?
b) Intelligence: If we ever search trees incompletely, the ordering of the candidate nodes decides the apparent 'intelligence'. Deviate from best-to-worst ordering to dumb them down; a toy sketch is below.
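Here's that toy sketch: pick the k-th best candidate action based on a per-race 'intelligence' value instead of always the best. Everything here (the struct, the qsort-based ordering, the intelligence scale) is made up for illustration; it's not how g_ai.c currently selects actions:
Code: [Select]
#include <stdio.h>
#include <stdlib.h>

typedef struct { const char *desc; float score; } aiAction_t;

static int cmpByScoreDesc (const void *a, const void *b)
{
    float d = ((const aiAction_t *)b)->score - ((const aiAction_t *)a)->score;
    return (d > 0) - (d < 0);
}

/* intelligence is 0..1: 1.0 always picks the best-scoring action, lower
 * values pick progressively worse-ranked candidates, 'dumbing down' the race. */
static const aiAction_t *AI_PickAction (aiAction_t *actions, int count, float intelligence)
{
    qsort(actions, count, sizeof(aiAction_t), cmpByScoreDesc);
    int idx = (int)((1.0f - intelligence) * (count - 1));
    return &actions[idx];
}

int main (void)
{
    aiAction_t actions[] = {
        {"rush the sniper", 42.0f},
        {"hide behind crate and shoot", 77.0f},
        {"shoot nearest civilian", 55.0f},
        {"stand still", 10.0f},
    };
    printf("taman (1.0) : %s\n", AI_PickAction(actions, 4, 1.0f)->desc);
    printf("ortnok (0.4): %s\n", AI_PickAction(actions, 4, 0.4f)->desc);
    return 0;
}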
I don't believe we're currently in any position to consider the extent of infra-vision. Maybe if we play with the CLOSE_IN metric... not sure. But overall, aliens all have the same amount of god-o-vision because we really have 0 visibility considerations in the AI (the last few we had, I snuffed out because of the vis flags). Once this cheating AI is top notch, we can go from there.

For anyone thinking of running tests like these: get CDT-Eclipse or go mental with any of your debuggers. I was slowly inching through the code until I got my trusty Eclipse up and everything sped along nicely. dcom_printf's are NOT to be trusted when you spam 100's of 'em.
Title: Re: the AI discussion
Post by: shaunc311 on July 07, 2010, 02:50:59 pm
I did notice that AI_Run is called on ServerFrame events, and only 1/10 of them is utilized. ActorThink is called way more than 8 times in the aliens' turn, so that's the first place to start.
What serverframes are, when AI_Run is really called, and why we have to call ActorThink more than once for any alien are the issues I will solve...

Just to save you a few minutes, I threw in some breakpoints and this is what I found:

ServerFrame is called every second, no matter whether it is the aliens', civilians', or player's turn.  90% of the time it just returns (as you already noted).  The other 10%, it figures out whether it's an AI player's turn to move, moves one AI actor, and then returns.  So one AI actor every 10 seconds.

From what I could tell, ActorThink is called for every alien and every civilian.  That may be why you are getting more than 8 calls per turn.  The good news is that with the new super smart aliens, the civilians die in the first turn, so there are fewer of them to calculate.

I was getting a lot of assertion errors in dbuffer.c so I could never finish an alien turn completely so this might not be 100% accurate.  When I get home I'll update from svn and see if that issue has been resolved.
Title: Re: the AI discussion
Post by: dodon on July 07, 2010, 03:20:23 pm
I was getting a lot of assertion errors in dbuffer.c
I investigated this. Have no time now, will post later.


When I get home I'll update from svn and see if that issue has been resolved.
not likely

EDIT:
Assertions in dbuffer.c (http://ufoai.ninex.info/forum/index.php?topic=5105.0)
Title: Re: the AI discussion
Post by: shaunc311 on July 07, 2010, 07:07:55 pm


AI_GetDamagePotential
...

The penalty we take on performance is quite significant. The fan-out may be only 10 (on top of the previous one, for a total of 160x the number of leaves compared to the 2.3 AI), but the tracing calculations needed here are a heavy factor.
Performance optimizations needed: an obvious proper bridge to G_ClientShoot (no reason to try 10x; instead it could give us a base dmg + deviation), OR a complete copy of the damage calculations inside the AI code (bad: code replication & heavy maintenance). The best solution is a mathematical equation that can give us comparable results with minimal CPU.
For people wanting to try and extract the math equations, have a look at g_combat.c. Entry point is G_ClientShoot. HELP APPRECIATED, I SUCK AT MATH

Been looking at this at work today.  I have a couple ideas:

1.  Instead of 10x, what if we set up a delta value.  We run it once to get a base value, and if the 2nd run is within the delta of the first value we just return the average of the two.  If it isn't, we keep running (up to 10x) until the running average stabilizes within the delta.

For example:
delta = 10
run 1: damage = 100
run 2: damage = 94 (avg 97)
returns 97

or
run 1: damage = 100
run 2: damage = 25 (avg 62.5)
run 3: damage = 44 (avg 56.3)
return 56.3

2.  Personally, I like the project management time estimation formula ((worst case + best case + 4x most likely) / 6), so we'd run it at least three times.  Take the best, worst, and average, then put them through the formula.  The 2nd example above would return about 58.4.
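A rough sketch of what ideas 1 and 2 could look like, assuming a hypothetical sample() callback that stands in for one mock G_ClientShoot call (the names here are made up, not existing engine functions):

Code: [Select]
#define MAX_SAMPLES 10

/* Hypothetical callback: performs one mock shot and returns the damage it
 * would deal. Stand-in for a single G_ClientShoot mock call. */
typedef int (*sampleFunc_t)(void *ctx);

/* Idea 1: keep sampling until two consecutive running averages differ by less
 * than 'delta', or we hit MAX_SAMPLES. */
static float AI_EstimateDamageAdaptive (sampleFunc_t sample, void *ctx, float delta)
{
	int total = sample(ctx);
	float avg = (float) total;
	int i;

	for (i = 2; i <= MAX_SAMPLES; i++) {
		float newAvg;
		total += sample(ctx);
		newAvg = (float) total / i;
		if (newAvg - avg < delta && avg - newAvg < delta)
			return newAvg;	/* converged early: skip the remaining samples */
		avg = newAvg;
	}
	return avg;
}

/* Idea 2: PERT-style estimate, (best + worst + 4 * average) / 6, from a small
 * fixed number of samples. */
static float AI_EstimateDamagePERT (sampleFunc_t sample, void *ctx, int samples)
{
	int best = 0, worst = 0, total = 0, i;

	for (i = 0; i < samples; i++) {
		const int dmg = sample(ctx);
		if (i == 0 || dmg > best)
			best = dmg;
		if (i == 0 || dmg < worst)
			worst = dmg;
		total += dmg;
	}
	return (best + worst + 4.0f * total / samples) / 6.0f;
}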

AI_HideNeeded
...
Now, this function is what causes the MAJOR fan-out disaster of doom. If you exclude the changes in this function, the AI is fast enough to play, but here we fan out again, on top of our 160x worst-case scenario. For each of those leaves we check the damage possible from every enemy. Yes, that's another set of the same fan-outs we had up to this point (we just have num_enemies, not num_enemies+civilians). So that's a worst case of 8 enemies, bringing the extra fan-out up to 8x10=80. Since this is in sequence with the damage calculations, it is multiplicative, bringing the total amount of leaves to 160x80=12,800 for a worst-case scenario. Yes, this is the point where the CPU just shrieks and tries to lift off. And don't forget, the shooting calculations are a whole lot heavier than any other calculation we ever do.
Obviously, this would be another place where we need to improve quite a bit if we still intend to perform full-tree non-heuristic searches. Only checking vs enemies that have a direct line of sight would be a first step (but that would make us oblivious to indirect-fire nades/launchers). Other than that, I can't see many ways to keep this functionality and still preserve full-tree searching. The performance of this function is mainly influenced by AI_GetDamagePotential, so any performance gain (in fan-outs) there affects this function in a squared kind of way (8x8 vs (8-Y)x(8-Y)).

And a couple for this as well:

1.  Still go through every actor, but only do the calculation if the actor is visible or has a grenade/launcher weapon.

2.  Maybe if one of the 10 calls to G_ClientShoot returned a value higher than the  HP * GUETE_DAMAGE_AVOID_FACTOR, that would be enough for them to hide.  So stop looping if that happens and just make them hide. 
Title: Re: the AI discussion
Post by: nonickch on July 08, 2010, 01:52:04 am
Shaunc311:
I don't use the 10 shots to get an avg damage; that is going to be pretty stable because the rand variations on damage are pretty minuscule. I run it so it can take into consideration a good amount of the hit probability. This is especially true with low-accuracy explosive weapons. So essentially I'm bringing back the old dmg * hit-% metric. The more I think about it, the more I'm convinced we just have to duplicate the hit mechanics employed in the G_Shoot functions:
a) Calculate the shooting origin of the weapon
b) Calculate a line that intersects the point we want to shoot
c) Apply rand & accuracy deviations on each axis with some gaussian rands
d) Trace the modified line and find the first object that it intersects
That wouldn't be that difficult to replicate, and the damage calculations are even easier. The real problem is the explosive weapons.
Mattn/other people that work on g_combat.c: What's the real use of the mock shots? Can we ninja-in the mock structure (and the shooting code) values that would stop us from having to sample shots?
It's not so much the base dmg (we could easily estimate that) nor the % to hit (we could use the UI estimation function). It's the LOF issue, which we cannot substitute with LOS. And remember, we don't deal with only one LOF, we deal with a 'window' to the target, since we don't always hit spot on, and that's where partial cover really comes into play (I guess we could skip the window calculations in favor of speed without sacrificing a lot).
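For reference, a rough sketch of steps a)-d), with Mock_TraceShot as a hypothetical stand-in for the engine's trace (this is not the real G_ShootSingle code, just the shape of the calculation):

Code: [Select]
#include <math.h>
#include <stdlib.h>

typedef float vec3_t[3];

/* Box-Muller: standard-normal random number, a stand-in for the engine's
 * gaussian rand. */
static float gauss_rand (void)
{
	const float u1 = (rand() + 1.0f) / ((float) RAND_MAX + 2.0f);
	const float u2 = (rand() + 1.0f) / ((float) RAND_MAX + 2.0f);
	return sqrtf(-2.0f * logf(u1)) * cosf(6.2831853f * u2);
}

/* Hypothetical trace: returns 1 if the ray from 'from' along 'dir' (up to
 * 'range') hits the target entity before anything else, 0 otherwise. */
int Mock_TraceShot(const vec3_t from, const vec3_t dir, float range, const void *target);

/* One simulated shot, following steps a)-d):
 * a) shot origin, b) ideal aim line, c) per-axis gaussian deviation scaled by
 * the weapon spread, d) trace the deviated line and see what it hits first.
 * The deviation is applied to two world axes here for brevity; the real code
 * deviates around axes perpendicular to the aim direction. */
static int Mock_ShotHits (const vec3_t origin, const vec3_t aimPoint,
	float spreadX, float spreadY, float range, const void *target)
{
	vec3_t dir;
	float len;
	int i;

	for (i = 0; i < 3; i++)
		dir[i] = aimPoint[i] - origin[i];
	len = sqrtf(dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2]);
	for (i = 0; i < 3; i++)
		dir[i] /= len;

	/* c) one gaussian deviation per axis of the aim window */
	dir[0] += gauss_rand() * spreadX;
	dir[2] += gauss_rand() * spreadY;

	/* d) trace the modified line */
	return Mock_TraceShot(origin, dir, range, target);
}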

1. Filtering by vis before taking shots: that's a very logical thing to do, and doesn't seem to be that hard (UNLESS that ent-to-ent visibility function uses the visflags, then I'm fked).
2. Aborting on shot > maxDmg: sheesh! I can't believe I made such a nubbish mistake. I'm moving the entire if statement inside the loop to constantly check the total mock damage vs the dmg_avoid_factor.

I scaled the sampling down to 5 checks and now 4x4 is quite playable (still weeding out a bug that makes aliens try to take more shots than they can).

I also have a confession to make: I think I was mistaken in my assumptions (at least partly) about god-o-vision and the vis flags. If I'm not mistaken, what I thought was a 'bug' was actually what was holding the AI back in terms of unlimited vision (quite crafty too). So, I worked under that assumption and amply handed out vision to the aliens in my current code. Nonetheless, I do believe we should first see the results of the pending changes, then contrast them with the 2.3 AI behaviour (with the augmented dmg calcs included). We could also slap on a bestAia re-calculation every time a visibility event is thrown (at which point visflags should get recalculated) for some extra spice (for 2.3, because the current work sees all).

Also, there are no performance issues with multiple calls to the AI_ActorThink() as I thought there would be. Oh well, good ole optimization needed then.
Title: Re: the AI discussion
Post by: shaunc311 on July 08, 2010, 02:47:26 am
So are you looking for a cheap version of G_ClientShoot that does the following(going down the lines in G_ClientShoot):

*ignores the if statements in the beginning that make sure you aren't shooting yourself and you have ammo(ammo check is done in ActorThink)
*probably keep the range check
*ignore the ammo check for multiple fire shots?  Or keep that?
*still calculates the visibility
*call G_GetShotOrigin (part A)
*fires the shots
*does the else part of the last if

Is there anything in shootGrenade or shootSingle that you'd also want to remove?  I think those methods will do B, C and D for you.

We could probably call those shoot methods a couple times to get a window of LOFs if that is more what you are looking for.  Not sure how expensive that is though.


Title: Re: the AI discussion
Post by: dodon on July 08, 2010, 08:30:43 am
Mattn/other people that work on g_combat.c: What's the real use of the mock shots? Can we ninja-in the mock structure (and the shooting code) values that would stop us from having to sample shots?
It's not so much as the base dmg (we could easily estimate that) nor the % to hit (we could use the UI estimation function). It's the LOF issue which we cannot substitute with LOS. And remember, we don't deal with only one LOF, we deal with a 'window' to the target, since we don't always hit spot on and that's where the partial cover really comes into play (I guess we could skip the window calculations in favor of speed without sacrificing a lot).

I encountered the mock shots in reaction fire. 100 of them are used to calculate the chances to hit and the chances of friendly fire. I think there is also a formula to calculate partial visibility.
Title: Re: the AI discussion
Post by: nonickch on July 09, 2010, 01:11:59 pm
I took some time and had a discussion with my brother (who just happens to be into the math we need because of engineering) and he could identify some of the maths we need, without needing any compromises.
Let's hope that works out ok, otherwise it's cpu grinding time.
Interesting thing about the RF mock shots there, dodon. Maybe if we can conjure the math properly we could give a helping hand to that part of the code. 100's of calls to the shooting code is probably quite painful for the cpu (albeit at a time when you don't really need the cpu). So the mock shot tries to do something similar to us? Hmmm

Shaunc311: Yes, that's pretty much the stuff we'd have to replicate for shootSingle. For the AoE equivalent, I really didn't look much into it, since I did expect it to be more complicated.
Even if we replicate the needed functionality in the AI, the cpu expense will stay about the same (we still need to play with vectors). The gain from copying the code/calcs is ofc not to abuse a system that wasn't meant for our case. I'll go and take a look at the RF code dodon mentioned, since it does look a lot like ours (I'd like to see how it handles the out-of-turn player_t struct).

Right, I believe we should wait a few days and see if we can solve it with maths. If not, we can bite the bullet and copy code into the AI (or augment the mock shot code to fit our needs).

Here's the issue as I described it to my brother, relayed here to make sure I won't ask the wrong things from him:
The point is to accurately (even semi-accurately if it's faster/easier) find the expected dmg and hit % against a target with the least possible resources. As it is right now, we take 10 shots at the target and avg the dmg, which gives us roughly an avg_dmg * hit_ratio that fits our needs. <Explaining the trace-rand-retrace mentioned in my last post>. The tracing/intersection math is what's really expensive, and we'd like to limit it down to one. Given that we know the gaussian generation values, can't we just estimate the hit % against the target by taking into account the fact that the rands apply to the equation of the vector?
Apparently this isn't something hard, BUT, we do have issue no. 2:
Well yes, but we don't care just about the hit % against the exact target (a point), but against the body of the target itself. So it's many points. And to make matters worse, at any distance from our gun there can be an object partially occluding the hit window. In this case we want to project the 'shadow' of the object onto the target window and subtract its shape from the 'target' area.
Suffice it to say this isn't as simple as the first scenario, as he started foaming a bit.

So, am I missing/forgetting something from my description? It'd be a drag to make him run the math only to find out at the end that I missed an important point.
edit: ah, yeah, we ignore dmg from bullets (non-gravity weapons) since the avg damage is easy to calculate. With explosion/indirect weapons the window suddenly expands and lies on the floor, while projecting shadows now becomes something that I can't even describe with magic markers. Ergh, maybe we can just bite the cpu bullet when the weapon is gravity-based?
Title: Re: the AI discussion
Post by: arakis on July 09, 2010, 01:37:56 pm
Maybe approach it from another angle: instead of calculating 10 shots at a target and averaging, why not read the variables used in a single calculation and estimate the percentage from them? Maybe log all shots made during some arbitrary game and average the results into a fast lookup table?

Btw, I am not sure the game needs the hit calculations to be that accurate. I think ±10% is more than enough (for the hit-percentage prediction).
Title: Re: the AI discussion
Post by: nonickch on July 11, 2010, 08:58:13 pm
So, here's the current plan with the hit % calculations, WITHOUT support for target occlusion from random things in the line of fire (more on it later):

Given the optimal shot point (the point which G_ShootSingle considers as the target point for a target entity), we really want to calculate the hit probability for every point in space the entity occupies.
We drop the 3rd dimension from the calculation (which does lead to less accuracy but much greater performance) and essentially calculate the hit % for each point in a slice of the target (perpendicular to the shot). In simpler terms, try shooting someone with only one of your eyes open.
So, in essence we have something like this:

----**----
---***---
---***---
--*---*---

legend:
*: points of the 'slice' we care about shooting

Because calculating for each point is a very expensive procedure, we scan line by line along each axis:
calculate the chance a bullet will land within the first line's range of interesting points, and so on.
Then calculate the chance a bullet will land within the first row's range of interesting points, and so on.

Bear in mind that we calculate for a range, which in this example is 2 points for the top line and 3 for the 2nd. Now, we have to translate the range hit % into per-point hit chances. The proper way of doing that is employing the CDF (cumulative distribution function) of a normal distribution. Now, that looks very expensive. Instead we use lookup tables, which are commonly used today to solve such problems by hand (yay for saved cpu cycles). This is another loss of precision.
So what we really plan on doing is calculate the hit % for the series of points we're considering and then assign individual hit % according to a weight defined in the lookup table. So, for example, if the series of 3 points in the 2nd row ends up having a chance of 12%, we give 12/3 * lookup_weight to each one.
The reason for having to do this once on the x axis and once on the y is that we have 2 gauss vars in the shooting calculations, one for each axis. The final hit % of each point is the product of the values it was assigned on each scan.

Up to here we have 0 tracing calculations and quite lightweight math (avoiding the erf function is a big plus), but we also have 0 idea whether there is an actual Line Of Fire. We basically calculate, with (expectedly) very nice precision, what would happen if there were no obstacles in the way.
We're pretty confident that up to here we're on a good road, and even though the equations we used were for non-gravity (bullet-based) weapons, the parabola of the grenade/launcher shouldn't be that difficult to implement.
Interesting observation: ever noticed that many games can only predict up to 95-97% max shot chances? I believe this is a direct result of the gaussian randoms and the respective calculations (like the ones we use here).
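For illustration, here is the per-axis CDF idea written with C99's erff() instead of a lookup table (a minimal sketch; the sigmas are assumed to already encode the weapon spread times the distance, which is not something taken from the real code):

Code: [Select]
#include <math.h>

/* Standard normal CDF, via C99 erff() (the lookup table would replace this). */
static float normal_cdf (float x)
{
	return 0.5f * (1.0f + erff(x / 1.4142136f));
}

/* Probability that a gaussian deviation with standard deviation 'sigma',
 * centred on the perfect shot, lands inside the interval [lo, hi]. */
static float axis_hit_prob (float lo, float hi, float sigma)
{
	return normal_cdf(hi / sigma) - normal_cdf(lo / sigma);
}

/* Hit chance for a rectangular window around the perfect-shot point, assuming
 * the two deviations are independent: P(hit) = P(x in window) * P(y in window).
 * Rows/columns of the 'slice' are just narrower [lo, hi] intervals. */
static float window_hit_prob (float left, float right, float bottom, float top,
	float sigmaX, float sigmaY)
{
	return axis_hit_prob(left, right, sigmaX) * axis_hit_prob(bottom, top, sigmaY);
}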

The problem is now occlusion. Given our previous raster-like approach, we need to find out which 'points' are occluded by objects in the line of fire.
Tracing shots for each and every one of the points and then calculating intersections with objects is right out of the question, since that'd probably be more expensive than the sampling that swamps the AI atm.
In theory, we could try to create a cone (or a pyramid if that's expensive) originating from the gunpoint and having as its base the 'slice' we're shooting at. Then find all objects contained in/intersecting this cone and project their shapes onto the slice. All points that are overshadowed get their hit % set to 0.
Unfortunately, not even my brother could think of an easy way to run the math, nor could I think of any function I've seen in the code that could handle that kind of intersection calculation.
The simplest thing we've thought of is to trace the shot at the center of the 'slice' (the perfect shot) and, depending on whether that gets through:
a) Consider the shot impossible (even if the occlusion is light and any accuracy deviations would land us hits)
b) Project only that object onto the slice and use its shadow.

I believe (b) is what the gamer sees in the UI (hit % along with the red part of the line if the perfect shot is not possible), and what we'll probably end up with.

EDIT: the cl_battlescape.c/CL_GetHitProbability, but we couldn't grasp... Oh crap, I think that function does exactly what I've been talking about here. Will try to use that instead of reinventing the wheel.

RE-EDIT concerning CL_GetHitProbability:
Ok, so the difference (the one I can grasp at least) in that function is that it considers the target (the actor, for our considerations atm) to be box-shaped.
Is that true in game terms, or do the actors actually have more complicated/non-convex hulls?
From the looks of it, the chance to hit that box is calculated just fine; it's the method of calculating occlusion from objects in the LOF (to the box target) that drops the accuracy. If you take a look at the function you'll see that it takes 8 fixed points on the target (defined as width/height percentages) and traces a line to each. After doing that, it multiplies the hit percentage by the percentage of traces that actually went through.
I don't think it even considers shooting at the center of the target (need to double-check), which probably leads to the 1 or 2 occasions where bursts with 95+% chances all miss because they land on a protrusion. Apart from that, all traces are considered with equal weight (in terms of their ability to reduce hit %), which would be true for a uniform distribution but not for our gaussian: occlusions at the edge of the window have minimal hit % impact compared to something close to the 'perfect shot'.
Unless I copy/pasted something wrong into the AI code (which I probably did), I noticed serious deviations between the sampling code and the predictive code where there is partial occlusion in place (I think that happens a lot when there's that africa-map fence).
An obvious possible semi-solution is to reposition & weight those sampling points.
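A minimal sketch of that semi-solution: weight each sample point by the gaussian density at its offset from the perfect-shot centre instead of counting every trace equally (the sigmas and sample offsets here are assumed inputs, not values taken from the real CL_GetHitProbability):

Code: [Select]
#include <math.h>

/* Weight of a sample point at offset (dx, dy) from the perfect-shot centre,
 * proportional to the bivariate gaussian density of the shot deviation. */
static float sample_weight (float dx, float dy, float sigmaX, float sigmaY)
{
	const float nx = dx / sigmaX;
	const float ny = dy / sigmaY;
	return expf(-0.5f * (nx * nx + ny * ny));
}

/* Occlusion factor: instead of "traces that got through / total traces",
 * return "weight that got through / total weight". clear[i] is 1 if the trace
 * to sample point i reached the target, 0 if it hit cover first. */
static float occlusion_factor (const float dx[], const float dy[], const int clear[],
	int n, float sigmaX, float sigmaY)
{
	float total = 0.0f, open = 0.0f;
	int i;

	for (i = 0; i < n; i++) {
		const float w = sample_weight(dx[i], dy[i], sigmaX, sigmaY);
		total += w;
		if (clear[i])
			open += w;
	}
	return total > 0.0f ? open / total : 0.0f;
}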

Since I did look a bit into bsp trees looking for a 'perfect' solution, I was told that it's probably possible to employ the bsp tree while trying to find occluding objects (as in, possibly do it fast enough):
Our problem: if the gunpoint is a light source pointing at the target box, find the parts of the box that are shadowed by random objects on the map.
Now, the basic idea is to traverse the bsp tree and keep checking for objects intersecting the pyramid defined by the gunpoint and the target box. I'll bribe the graphics geeks with coffee tomorrow and see what they really meant. If this works, it's all up to profiling to show us whether the performance hit is worth it (at least we could use it in the UI if it's too heavy for the AI).
A notice on performance: we're already traversing the bsp tree for each sample trace, so this might end up being faster (I don't know how heavy the object-intersection math is compared to the line intersections).

Question: what is the edict_t->origin point? Is it the center of the object, or some other predetermined point?

I know I need to at least release the code I'm currently working on, but it has so many leftovers from previous experiments that I'm ashamed to show it XD. Will have to clean up and post.
Title: Re: the AI discussion
Post by: shaunc311 on July 12, 2010, 05:40:40 pm
Finally back online and I had a thought.  

We are using g_trace to make a bunch of lines to simulate %s.  Can we instead make 1 trace but make the end point larger and larger (based on the ACC) as it traces away from the gun almost like a cone?  Then calculate the % based on the surface area of the trace circle when it hits the target.

So if the area of the circle at point X is 400 and the overlap is 200, it would be 50%.  

Then if something is in front of the target, we can figure out which percentage of the circle is blocked.

For example, two objects in front of the target.  The area of the target circle at the first object is 100, and 30% of it is covered.  The area of the target circle at the 2nd object is 200 and it's 15% covered.  The area of the target circle at the target is again 400, with 50% coverage.  Since we still don't know how much of the 30% and 15% are overlapped (but we probably could figure it out somehow) we'd probably say the shot has (((.5 * 400)-(.3 * 400)-(.15 * 400))/400 = (200 - 120 - 60 ) / 400 = 5%

And I just realized that all those 400's cancel out so it's really just: .5-.3-.15 = .05.

Not sure how this would work with grenades though.



Yeah...pretty much ignore this.  I spent a lot of time looking at the trace functions and I'm not seeing a way to do it.  It also is probably oversimplifying stuff.
Title: Re: the AI discussion
Post by: shaunc311 on July 13, 2010, 09:53:01 pm
The main loop in AI_PrepBestAction is also a bit buggy

Code: [Select]
/* set borders */
dist = (ent->TU + 1) / 2;
xl = max((int) ent->pos[0] - dist, 0);
yl = max((int) ent->pos[1] - dist, 0);
xh = min((int) ent->pos[0] + dist, PATHFINDING_WIDTH);
yh = min((int) ent->pos[1] + dist, PATHFINDING_WIDTH);

...
/* evaluate moving to every possible location in the search area,
 * including combat considerations */
for (to[2] = 0; to[2] < PATHFINDING_HEIGHT; to[2]++)
	for (to[1] = yl; to[1] < yh; to[1]++)
		for (to[0] = xl; to[0] < xh; to[0]++) {
			const pos_t move = gi.MoveLength(gi.pathingMap, to, crouchingState, qtrue);
			if (move != ROUTING_NOT_REACHABLE && move <= ent->TU) {

I think I have found a couple problems that might need someone to verify.

The first problem is the border consideration.  It's perfect if the actor is in the dead center of the battlefield.  It is wrong if the actor is by an edge.  For example:

Let's say an actor is standing on the right edge of the map, but still vertically centered.  Instead of having a box with a horizontal side of (tu+1)/2 + (tu+1)/2 (or just tu), we have (tu+1)/2 + 0.  If I'm right, then that is just a simple if/else fix.

The 2nd problem is the last if statement.  I was curious as to how inefficient the loop was so I added an else to it that just counted the number of unreachable moves.  Depending on the position of the alien it was between 200 and 11,000.  I'm unable to figure out why it spikes that high.  My next step was to see how many possible moves there are to see if 11,000 is just noise or a lot and something is wrong.  I might try that tonight.


Update:
Ok, so it is pretty significant.  The aliens are averaging about 8700 impossible moves out of 9248.  One thing I noticed is that the math above assumes 1 TU per move which is optimistic.  Another thing is that it tries all 4 floors, but this map only really has 1 (africa small).

Maybe a better calc would be:

Code: [Select]
dist = ((ent->TU + 1) / 2) / TU_MOVE_STRAIGHT;

This results in 2048 total moves with about 200 being possible (again, just 1 floor out of 4 being possible, and I always assume a straight walk but the real path is probably curved).  This seems to be a significant performance gain.

Update 2:
Debugging this has made me realize that the original /2 was for TU_MOVE_STRAIGHT but was never labeled as such. The square's side needs to be 2*TU/TU_MOVE_STRAIGHT to cover the farthest the alien can possibly walk in all directions. That explains the performance boost I saw, but it certainly crippled the aliens.

I still don't like the loops.  Mainly the issue I have now is that PATHFINDING_HEIGHT is always 8, even if a map is only 4 levels tall.  The other thing is that the PATHFINDING_WIDTH is always set to 256 but I think the Africa small map is a lot smaller than that so a lot of movements aren't going to be reachable.  A better implementation would be a sphere around the actor that has a radius of TU/TU_MOVE_STRAIGHT.   

Title: Re: the AI discussion
Post by: nonickch on July 17, 2010, 09:19:16 pm
shaunc311: well, keep playing with it ;)

I was kinda afk the past 2 weeks, so I just came back to my old mess:
the CL_GetHitChance is most of the math I wanted to code, so I opted for copy/pasta'ing it while fixing the very annoying partial-occlusion bug it has (see my last post).
Instead of multiplying the no-cover hit chance by the percentage of the 8 tracings that were successful, we're going to assign weights to each 'hitbubble' that the tracing checks. So missing the center bubble of the target is way more important than missing one on the edge.
My brother is out there fixing this atm, but we have one very important question we need answered before we can even start:

What point of the 'model' is the edict_t->origin?
Is it the center of the object, or some corner of it?

We kinda need that so we can extrapolate the center of the shooting surface and start creating the boxes.
Title: Re: the AI discussion
Post by: nonickch on July 20, 2010, 02:15:09 am
I'm close to producing something semi-usable (pending the fixes on the hit probabilities), and I spent my last day debugging my working copy.

Here's one thing that really puzzled me:
some aliens end up having "ent->chr.reservedTus.reaction>0" while the other members of the reservedTu struct seem uninitialized (like weapon).

AI_FighterCalcBestAction considers using "tu = ent->TU - move", which is fine as long as no one has enabled RF. This ends up with aliens trying to use more TUs than they have while shooting (G_ActionCheck returning qfalse) and, even worse, while moving (so they end up a few spots short of their cover destination, in plain LOS/LOF).
I replaced the usages of "ent->TU" with "G_ActorUsableTUs(ent)" as an immediate patch for this behaviour, but this needs to be looked into in more detail as it gimps some aliens (fewer TUs) without a reason.

For some reason RF keeps giving my interrupting alien a chance to play as if it were its turn. Somehow it only works for the alien that is reacting, and I like this one. The problem is that G_ActionCheck always returns false because "level.activeTeam != player->pers.team"; activeTeam is set to the player's.
So currently this RF doesn't work with the AI I'm building, unless some changes in trunk in the past 2 weeks addressed this issue. If someone is up to date with trunk, try and see if G_ActionCheck's first if statement is ever entered while you're causing an RF.

Also: please consider rolling the 2.3 AI back to its release version, at least for changes produced in this thread. Lemme play around with the AI in trunk, and if we deem the changes to be proper AND STABLE, then we can copy them to 2.3. You could keep the dodon patch if you think it's ok, but I believe its behaviour is only fitting alongside the fixes to the damage estimation.
The mess I created with that 'innocent' code refactoring that got posted to 2.3 introduced a series of crash bugs. Thank god geever pointed out to me the hordes of angry players.

UPDATE
Ok, since the playtime is now below 2-3 mins for small maps, it hasn't crashed in 3 hrs, and I've merged with the latest trunk, I guess you can have a look at my current working copy and see how it feels:
the diff vs trunk 31041 (http://pastebin.com/ZcHe14jP). I have no idea how easy it is to apply those, I spent like 40 mins trying to get mine to work when I was moving to latest trunk.
The entire g_ai.c file if you don't like the patch (http://pastebin.com/hkbzdUHS) You should also add "#define GUETE_DAMAGE_AVOID_FACTOR   0.4" in g_ai.h.
PLEASE DON'T COMMIT THE PATCH. There are quite a few things I need to do before we can consider that one, namely the performance.
Kildor on IRC mentioned that he gets an undefined reference for TR_TestLine on line 165, you can just delete lines 33 up to 203 and get away with it (as long as you don't try to enable math estimation as mentioned later on). Also delete the entire AI_CalculateDamageMath in this case.

Now some general notes on the code: Currently only the sampling damage estimation via G_ShootSingle is in, after applying some sensible optimizations. You can find some blocks of "#if 0/#endif" that drop the main dmg estimation function of 31041 and most importantly the code that enables selection between sampling and math-based damage estimation.
The reason why the math-based (see: CL_GetHitProbability) is out is because:
a) it has some inherent problems with occlusion (as mentioned in previous posts) which are still not fixed
b) I think I did something wrong. The results of the two estimation methods vary so much as to imply it.
Uncomment the block before line 500 to see the difference with the math enabled (while commenting out line 500). You may wish to also comment-out 494-497 and all lines referencing testTotalDamage, these are used to compare results from both damage estimations.

You will find many //-style comments; these are things that need to be removed/ironed out before considering applying this patch to trunk.

I'd mention the new wonderful things I saw my critters do, but I'm biased and want to hear your own comments. The bad things I've spotted so far that I need to fix are:
* Need to exclude from the G_Vis calculation (next to the HideNeeded) any units we expect to kill. Example: a machinegunner chose to run away and hide from a lone knife wielder that had the misfortune to end his turn 2 blocks away from him. The knifer was also the only one in the building.
* Get rid of the RF TUs that are auto-magically held for some (apparently random) aliens. Waaaait a moment, I do remember that shaken actors 'refuse to let their guard down', maybe this is the perpetrator?

Also, in reference to the dodon patch, I switched the "!HideNeeded || !G_Vis..." to "!G_Vis && !HideNeeded". The reason you want to be BOTH non-visible AND non-shootable, checked in that order, is:
a) If you're visible, you're most likely shootable, so don't waste the horrendously expensive HideNeeded (I kinda beefed it up a bit).
b) You can be non-visible and still shootable: EXACTLY behind someone's back (so CLOSE_IN maximizes too), which was what the critters (melee units mostly) would prefer to do at close range. It can be 'oh WOW!' if you didn't pay attention on the enemy's turn, but unless we can find a way to only allow this via non-visible flanking moves, it's too silly.

A very precious performance trick that I just thought of:
Since the G_Shoot*'s are the performance hitters, and AI_HideNeeded really hammers them multiple times for each alien, why not create a 'Threat Map' that is common to all the aliens? The only thing mobile in our turn is us, so the damage an enemy can apply to a certain square is almost fixed (unless we cut a LOF with the body of an alien), sans the armor protection. Assuming that aliens really consider 200 spots to move to (as per shaun's post), they will also check damage received from 8 enemies, with an avg of 3 different firetypes. So 200x8x3 = 4800 calls to damage estimations (ha! times 10 for the sampling).
Then we should (we actually don't) do the same in the case where we shoot-and-move (FindHidingLocation), times the number of aliens we move == the reason this is still slow.
If we share a 'threat map' common to all aliens, we should be dropping the performance cost of hiding anywhere from X-fold to 2X-fold (where X is the number of aliens we have).
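A rough sketch of that threat map, with AI_EnemyDamageAt as a hypothetical stand-in for whatever damage estimator we end up with (and z-levels ignored for brevity):

Code: [Select]
#include <string.h>

#define PATHFINDING_WIDTH 256	/* grid dimension, as used by the pathfinding code */

/* Expected damage an actor standing at (x, y) would take from the enemy team,
 * rebuilt once at the start of the alien turn and shared by every alien. */
static float threatMap[PATHFINDING_WIDTH][PATHFINDING_WIDTH];

/* Hypothetical estimator: expected damage enemy 'e' can deal to a target
 * standing at grid position (x, y) with its best firemode. */
float AI_EnemyDamageAt(const void *enemy, int x, int y);

static void AI_BuildThreatMap (const void **enemies, int numEnemies)
{
	int x, y, e;

	memset(threatMap, 0, sizeof(threatMap));
	/* In practice only the tiles the aliens actually consider moving to need
	 * to be filled in; the full grid is shown here for simplicity. */
	for (y = 0; y < PATHFINDING_WIDTH; y++)
		for (x = 0; x < PATHFINDING_WIDTH; x++)
			for (e = 0; e < numEnemies; e++)
				threatMap[y][x] += AI_EnemyDamageAt(enemies[e], x, y);
}

/* The AI_HideNeeded-style check then becomes a cheap lookup instead of a pile
 * of mock shots per candidate position per alien. */
static int AI_PositionIsSafe (int x, int y, float maxAcceptableDmg)
{
	return threatMap[y][x] <= maxAcceptableDmg;
}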
Title: Re: the AI discussion
Post by: Lew Yard on July 20, 2010, 09:59:23 pm

/* evaluate moving to every possible location in the search area,
    * including combat considerations */
   for (to[2] = 0; to[2] < PATHFINDING_HEIGHT; to[2]++)
      for (to[1] = yl; to[1] < yh; to[1]++)
         for (to[0] = xl; to[0] < xh; to[0]++) {
            const pos_t move = gi.MoveLength(gi.pathingMap, to, crouchingState, qtrue);
                                if (move != ROUTING_NOT_REACHABLE && move <= ent->TU) {


Glancing at that snippet, it strikes me that if you're willing to (1) store movement costs and/or paths, (2) ignore movements that aren't from one adjacent tile to another (which would presumably exclude aliens leaping from a ledge to a multi-step drop), and (3) stipulate that aliens use a single consistent heuristic during movement (either maximizing safety, or maximizing speed) -- that you might be able to save some computation by caching.

Under those assumptions, the 'best' movement from one tile to another will likely be the 'best' movement from the source tile to an adjacent tile, plus another movement.  This is false if you aren't willing to assume (3) -- e.g. it might be 'worth it' to walk standing to one location IF that leaves TUs to reach a safer location, while if that intermediate destination were actually the endpoint you'd crouch.    But if you can cache, you might save some computation by more quickly excluding impossibilities (because they're not adjacent to any possible shorter-path tile), and depending on whether the moveLength caches.  If THAT caches, then the above stuff I wrote can mostly be ignored -- I'm basing this just on the loop structure and that it's pathfinding, not on a deep examination of the code base.
Title: Re: the AI discussion
Post by: nonickch on July 21, 2010, 08:28:51 pm

/* evaluate moving to every possible location in the search area,
    * including combat considerations */
   for (to[2] = 0; to[2] < PATHFINDING_HEIGHT; to[2]++)
      for (to[1] = yl; to[1] < yh; to[1]++)
         for (to[0] = xl; to[0] < xh; to[0]++) {
            const pos_t move = gi.MoveLength(gi.pathingMap, to, crouchingState, qtrue);
                                if (move != ROUTING_NOT_REACHABLE && move <= ent->TU) {


Glancing at that snippet, it strikes me that if you're willing to (1) store movement costs and/or paths, (2) ignore movements that aren't from one adjacent tile to another (which would presumably exclude aliens leaping from a ledge to a multi-step drop), and (3) stipulate that aliens use a single consistent heuristic during movement (either maximizing safety, or maximizing speed) -- that you might be able to save some computation by caching.

I'm not sure I'm following your train of thought, then again I only just woke up and my coffee is not even half empty.
Even though I never tried to play with this set of loops (shaunc did), what I get from it is that it sequentially tries to evaluate the movement cost to any tile within a box surrounding the alien. It doesn't calculate each step separately. So, if you're standing at the top of the stairs, the bottom of the stairs might be 4 TUs away (2 tiles), but reaching the tile at the bottom of the drop next to the stairs is only 2 TUs (one step into the air, then the drop). This is all ofc done by the pathing algorithm, which does consider drops. I've seen both soldiers and aliens casually drop from ledges instead of going for the stairs (actually, getting them to use the full length of the stairs is kinda tough). As for (3), we don't use any heuristics; we calculate the full set of possible moves, but we never even know the series of steps we'll take to reach the destination. The pathing might be using some kind of heuristic, and it definitely seems to be producing the fastest path to a destination (I've seen the code laced with comments/ideas about checking visibility/RF exposure while we walk).


Under those assumptions, the 'best' movement from one tile to another will likely be the 'best' movement from the source tile to an adjacent tile, plus another movement.  This is false if you aren't willing to assume (3) -- e.g. it might be 'worth it' to walk standing to one location IF that leaves TUs to reach a safer location, while if that intermediate destination were actually the endpoint you'd crouch.    But if you can cache, you might save some computation by more quickly excluding impossibilities (because they're not adjacent to any possible shorter-path tile), and depending on whether the moveLength caches.  If THAT caches, then the above stuff I wrote can mostly be ignored -- I'm basing this just on the loop structure and that it's pathfinding, not on a deep examination of the code base.
Crouching is so not considered in the AI, it makes me sad. The only chance for the alien to crouch is if it's about to end its turn and it has enough TUs. On the next round it will always stand up before moving :(. At least shooting from a crouched position is one of the 'todo' things on my list. Moving while crouched can only come into play after that, as moving somewhere crouched, only to stand up and shoot, is always (not considering safety) counter-productive.
The current structure of aiAction_t is pretty simplistic atm. It can only store: [mov],[shoot],[mov]. Each one is optional, and moves only define coords, not crouch status. As you can see, there's very little that can happen with this struct, and upgrading it to a linked list of generic actions is a must for any serious additions. There's a light exception to this struct, but it's not worth mentioning since it's almost the same behaviour.
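A rough sketch of what such a generic action list could look like (names hypothetical, not existing structs):

Code: [Select]
/* Replacement idea for the fixed [move][shoot][move] aiAction_t: a linked
 * list of generic steps, so plans like move-crouch-shoot or shoot-reload-move
 * become representable. */
typedef enum {
	AI_ACTION_MOVE,
	AI_ACTION_SHOOT,
	AI_ACTION_CROUCH,
	AI_ACTION_STAND,
	AI_ACTION_RELOAD
} aiActionType_t;

typedef struct aiActionNode_s {
	aiActionType_t type;
	int tuCost;			/* TUs this step is expected to use */
	union {				/* per-action payload */
		struct { int to[3]; } move;	/* destination grid position */
		struct { int targetIdx; int shootType; int fdIdx; int shots; } shoot;
	} data;
	struct aiActionNode_s *next;	/* next step of the plan, NULL when done */
} aiActionNode_t;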
If you check a few lines above the quoted code snippet, you'll see a G_MoveCalc call. I'm not 100% sure, but I believe it precalculates the 'pathingMap' for use by our team. So in essence every call to gi.MoveLength should be looking up the appropriate cached value rather than recalculating the pathing.

update: I looked into the move algos because of a crash bug, and I saw: "/** @todo add heuristic for A* algorithm */", so it sounds like we have no heuristic for that either. Anyone wanna go for it? Pathing doesn't seem to be the issue atm, but aliens hugging the hulls of ships need pathing calculations over a greater area (we need to use path distance instead of vector distance). A complete search would smack us in the face if we expand the 'searchbox' to the needed size, so finding the proper heuristic would be a very nice addition.
Title: Re: the AI discussion
Post by: Lew Yard on July 22, 2010, 12:44:39 pm
Oh, was mostly musing whether it'd be possible to incrementally consider interesting states (e.g. compute what one can do with 2 TUs of movement, then 3 TUs based on those, then 4 TUs based on the prev. TUs) but that'll only be useful in a highly constrained environment (e.g. interiors) where much of what's in a large cube won't actually be reachable.   Precomputation of pathing presumably makes this moot, anyway.

Speaking of heuristics, a quick look at AI_FindHidingLocation suggests to me that the loops there return immediately when they find the first spot within the grid that meets the reachability and lack-of-visibility criteria -- and that the search operates in a row-by-row (or column-by-column, one of these) way.

Seems that this would bias towards a particular direction (towards wherever pos[1] is smallest, I think), and perhaps towards using more TUs than necessary (leaving fewer for reaction fire)?  A search organized as concentric squares of increasing radius around the actor's location would be slightly more complicated, but might result in a shorter move, leaving more TUs for RF, crouching, reloading et al; while trying to bias the direction towards or away from the nearest enemy (depending on weapon range vs current distance, or health?) might be more tactically interesting.
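Something along these lines, with AI_CheckHidingSpot standing in for the existing reachability + visibility test (a sketch only; map-border clamping and the crouch/TU trade-offs are left out):

Code: [Select]
/* Hypothetical stand-in: returns 1 if (x, y) is reachable and hidden. */
int AI_CheckHidingSpot(int x, int y);

/* Visit candidate spots in rings of increasing Chebyshev radius around the
 * actor, so the first acceptable spot found is also (roughly) the closest,
 * leaving more TUs for RF, crouching or reloading. */
static int AI_FindHidingSpotRings (int startX, int startY, int maxRadius,
	int *outX, int *outY)
{
	int r, x, y;

	for (r = 0; r <= maxRadius; r++) {
		for (y = startY - r; y <= startY + r; y++) {
			for (x = startX - r; x <= startX + r; x++) {
				/* only the border of the square: the interior was already
				 * visited at a smaller radius */
				if (r > 0 && x != startX - r && x != startX + r
				 && y != startY - r && y != startY + r)
					continue;
				if (AI_CheckHidingSpot(x, y)) {
					*outX = x;
					*outY = y;
					return 1;
				}
			}
		}
	}
	return 0;
}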
Title: Re: the AI discussion
Post by: nonickch on July 22, 2010, 02:56:08 pm
You are correct on the AI_FindHidingLocation point.
Wanna play around with it? I'm tied with the shooting & bugs atm.

Bear in mind that LUA uses that function aswell, we can adjust for any changes (e.g parameters) you'd want in the C AI, but I have no idea what lies in LUA-land.

Also, this function only checks visibility and not shootability, which may lead to silly things like 'hide behind someone's back' (common for melee units). Now, I wouldn't suggest putting AI_HideNeeded in there as well; it's heavy enough as it is atm.
Gah, I'm torn again on the G_Vis AND/OR HideNeeded combination. The OR is clearly slower (atm, because of mock shots), but the AND doesn't let the aliens shoot from positions that are visible yet can't take that much fire back (e.g. corners).
Title: Re: the AI discussion
Post by: Lew Yard on July 23, 2010, 02:57:29 am
Hm, 'k.  Will see about getting my build environment set up.  Haven't attempted to build the tree, just been meandering through the code w/ Emacs.  :D
Title: Re: the AI discussion
Post by: Lew Yard on July 24, 2010, 09:07:51 pm
http://sourceforge.net/tracker/?func=detail&aid=3033378&group_id=157793&atid=805244 (http://sourceforge.net/tracker/?func=detail&aid=3033378&group_id=157793&atid=805244)

'k.  This includes a patch which is a pretty incremental change.

Known assumptions and limitations retained:

- Only destinations in the same plane are considered.   This was present in the replaced code.

- The shortest path to another point in the same plane is fully within the plane, and includes tiles which are closer by Manhattan distance.  This is both new and incorrect in some cases (maps making heavy use of stairways et al could provide shorter paths in terms of TU); where it is wrong, the increasing-ring approximation is more likely to be wrong.

Facets changed:

- Average path length taken should be reduced, since locations are considered in order of Manhattan distance to alien.

- Bias towards y=0 should be reduced for the same reason; bias occurs, but within the expanding diamonds, so one hiding spot would be preferred to another with a lower y if the Manhattan distance is less.



Now, with respect to 3D -- because of falls et al, it seems possible that it might be faster to descend further sometimes (unless height changes cost TU even when caused by gravity) -- have not checked this.  With this in mind it seems like a BFS or A* traversal of node to node might be much preferred to iterating over grid patterns.   e.g. something with a signature ala


triplet_list_node_t ** bfs(int maxTUs, byte x, byte y, byte z, some_map_object_type_I_dont_remember_right_now_t * foo);


where the return value is an array of length maxTUs (if we ignore 0-cost moves, which don't exist right now AFAIK), whose contents are pointers to linked list nodes (or, alternately, a structure type with an array pointer and a length field) w/ X-Y-Z triplets with a shortest-path cost of exactly the corresponding TU.  This would codify a bias towards shortest paths, which might be absurdly bad (like a civvie's "escape" route passing next to a visible alien with a Kerrblade),  but it might be useful in quite a few functions.    This would also better facilitate hypothetical silliness like a hypothetical April Fools' map in which everybody moves like knights in chess ;) (or more seriously, maps with teleporters, varying gravity for locally different movement costs, et al).

I haven't yet looked at the map / routing code to see how readily it'd be implementable.  For something like the above, what would need to exist would probably be something like "from this spot, where can I get to in 2..max(min(number of TUs required to actually make a move))".   Doors are another consideration; if the code that computed movement cost ignored the possibility of opening doors, that'd need to be handled separately; if it did, then that TU range for the search would need to consider cost of door opening + cost of moving through that door.
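For what it's worth, here's a rough sketch of that bucketed traversal in 2D (Grid_StepCost is a hypothetical stand-in for the real routing cost, z-levels and doors are ignored, and the caller owns/frees the nodes):

Code: [Select]
#include <stdlib.h>
#include <string.h>

#define GRID_W 256
#define GRID_H 256
#define COST_UNREACHED 255

/* A position reachable for exactly some TU cost; nodes with the same cost are
 * chained together in the same bucket. */
typedef struct reachNode_s {
	unsigned char x, y;
	struct reachNode_s *next;
} reachNode_t;

/* Hypothetical per-step cost: TUs to step from (x,y) to the adjacent (nx,ny),
 * or 0 if the step is impossible (walls, holes, ...). */
int Grid_StepCost(int x, int y, int nx, int ny);

/* Dijkstra-by-buckets: buckets[t] ends up listing every tile whose cheapest
 * path from (sx,sy) costs exactly t TUs, for t = 0..maxTUs (maxTUs < 255). */
static void AI_BuildReachBuckets (int sx, int sy, int maxTUs, reachNode_t **buckets)
{
	static unsigned char bestCost[GRID_H][GRID_W];
	static const int dx[4] = { 1, -1, 0, 0 };
	static const int dy[4] = { 0, 0, 1, -1 };
	int t, d;

	memset(bestCost, COST_UNREACHED, sizeof(bestCost));
	memset(buckets, 0, sizeof(reachNode_t *) * (maxTUs + 1));

	/* seed with the start tile at cost 0 */
	buckets[0] = calloc(1, sizeof(reachNode_t));
	buckets[0]->x = sx;
	buckets[0]->y = sy;
	bestCost[sy][sx] = 0;

	for (t = 0; t <= maxTUs; t++) {
		const reachNode_t *n;
		for (n = buckets[t]; n; n = n->next) {
			if (bestCost[n->y][n->x] != t)
				continue;	/* a cheaper path to this tile was found already */
			for (d = 0; d < 4; d++) {
				const int nx = n->x + dx[d], ny = n->y + dy[d];
				int cost;
				if (nx < 0 || ny < 0 || nx >= GRID_W || ny >= GRID_H)
					continue;
				cost = Grid_StepCost(n->x, n->y, nx, ny);
				if (cost <= 0 || t + cost > maxTUs || t + cost >= bestCost[ny][nx])
					continue;
				bestCost[ny][nx] = t + cost;
				{
					reachNode_t *node = calloc(1, sizeof(reachNode_t));
					node->x = nx;
					node->y = ny;
					node->next = buckets[t + cost];
					buckets[t + cost] = node;
				}
			}
		}
	}
}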

Title: Re: the AI discussion
Post by: nonickch on July 31, 2010, 04:26:35 pm
gonna go afk for another 2 weeks because of vacations
Title: Re: the AI discussion
Post by: Mattn on July 31, 2010, 04:38:34 pm
that's a pity - as i'm back now ;)

anything we should apply to trunk? or do you want an own branch to work on?
Title: Re: the AI discussion
Post by: nonickch on August 01, 2010, 04:38:07 pm
Here's the svn diff of my current status (http://pastebin.com/KFaBZYe7)

Not much has changed since the last diff.
I've switched the damage estimation to the mixed model (mock for gravity/splash, math for the rest). This leads to normal execution times (as long as there aren't many such weapons being held on the field).
Ofc, since I made squat progress on the math estimations, some of their "features" are still there (e.g. I just had aliens try to shoot through walls; in my defence, my squaddie was getting non-0 hit chances too).
Enable the commented-out "#define MOCK_ESTIMATION_ONLY" if you want to see the difference of mixed estimation vs mock-only.
You do get interesting things happening with mock-only in the rivertown map (see bug report (http://sourceforge.net/tracker/?func=detail&aid=3032352&group_id=157793&atid=805242)).

It's possible that you will get an unreferenced function error at compile time; include the appropriate .c file. Somehow my linker has no issues with it (probably because I used to have it included sometime in the past?).

You may/may not decide to commit this to the trunk. The problem is I won't be around till the 18th to fix any broken bits that come up.
Title: Re: the AI discussion
Post by: Mattn on August 01, 2010, 07:17:20 pm
i think we'll wait with committing then ;)

having someone who can answer my questions would be good when it comes to bugfixing.
Title: Re: the AI discussion
Post by: dodon on August 05, 2010, 09:12:01 pm
I tried the new AI code. The aliens are now tougher (at least with MOCK_ESTIMATION_ONLY).
Only the behaviour of the bloodspiders is lacking. I think we need different GUETE values for melee-only units.

To compile the code with C::B on windows I had to remove some "error messages" from tracing.c

From the description of G_ClientShoot: @return qtrue if everything went ok (i.e. the shot(s) where fired ok), otherwise qfalse.
That means if G_ClientShoot returns false you can't shoot that target at all. So this optimization of AI_CalculateDamageMock reduces the time for the alien turn noticeably (a similar break is in the mock shots of the reaction fire):

Code: [Select]
/**
 * @brief Estimate the damage of a shot (given firedef and shoottype) by making mock shots with G_ClientShoot. This function properly calculates for
 * gravity & splash-based weapons while being the most accurate. This comes at a heavy processor cost though and should only be used where really needed
 * @sa G_ClientShoot
 * @param[in] ent_player the player whose actor will try to shoot
 * @param[in] ent AI actor that is trying to shoot
 * @param[in] check The target we're shooting at
 * @param[in] shootType The shooting type we're using
 * @param[in] fdIdx The firedefinition index we're using
 * @param[in] fd The firedef we're using. Redundant because of fdIdx and used only for minimizing sampled shots.
 * @return The damage expected from the shot
 */
static int AI_CalculateDamageMock (const player_t * ent_player, edict_t * ent, const edict_t * check, shoot_types_t shootType, fireDefIndex_t fdIdx, const fireDef_t *fd)
{
	int mockShotDmg = 0;
	int j;
	const int samples = 10;	/**< Try to make only this many G_ShootSingle shots in order to keep CPU usage to a minimum */

	/* Sample mock shots through the proper G_ClientShoot to see what kind of damage we can deal with our current settings */
	/** @todo OPTIMIZE, we're really having a HUGE bite into the cpu over here */
	/** @todo is the z-align value of 0 correct? */
	/** @todo allow mock.allowSelf = 1? */

	/** If the firedef already shoots multiple times, divide by fd->shots so we take about 10 samples in total */
	const int iterations = ((int)(samples / fd->shots)) ? samples / fd->shots : 1;

	/** Only aliens are pleased in damaging civilians */
	const int shootCivilians = (ent->team == TEAM_ALIEN);
	float civilianPenalty = 1.0f;

	for (j = 0; j < iterations; j++) {
		shot_mock_t mock;
		memset(&mock, 0, sizeof(mock));
		if (G_ClientShoot(ent_player, ent, check->pos, shootType, fdIdx, &mock, qfalse, 0)) {
			/* Add the total damage expected */
			mockShotDmg += mock.damage;
			/* Remove the fraction of the damage that is done to us/our team and civilians (if we're not aliens) */
			mockShotDmg -= ((mock.friendCount + mock.self) / (mock.enemyCount ? mock.enemyCount : 1)) * mock.damage;
			civilianPenalty = shootCivilians ? 1 : mock.civilian / (mock.enemyCount ? mock.enemyCount : 1.0f);
			mockShotDmg *= civilianPenalty;
		} else {
			/* G_ClientShoot returned qfalse: no shot is possible at all, so stop sampling */
			break;
		}
	}
	mockShotDmg /= iterations;

	return mockShotDmg;
}
Title: Re: the AI discussion
Post by: Mattn on August 07, 2010, 07:28:10 am
i've updated the patch tracker at https://sourceforge.net/tracker/?func=detail&aid=3037926&group_id=157793&atid=805244 with a new version (applied coding guidelines, added some todos and cleanup)

please use this patch for basing further patches on
Title: Re: the AI discussion
Post by: Kobold on August 07, 2010, 07:36:47 pm
Are the AI changes available for the 2.3 branch or only for 2.4dev?
Title: Re: the AI discussion
Post by: Legendman3 on August 14, 2010, 05:08:52 pm
Only for 2.4dev. Does anyone mind if I give it a try and make a lua file for soldier AI? Just want to see if anyone minds what I'm trying.
Title: Re: the AI discussion
Post by: Legendman3 on August 25, 2010, 04:12:38 am
It's been 10 days... no reply... so here you go, it's a combination of the civilian and alien lua files.
Title: Re: the AI discussion
Post by: Mattn on August 25, 2010, 07:57:17 am
thanks a lot - i will have a look
Title: Re: the AI discussion
Post by: nonickch on August 28, 2010, 09:54:22 pm
back, I just couldn't stay on that island for just 12 days...
Title: Re: the AI discussion
Post by: Mattn on August 29, 2010, 03:58:00 pm
welcome back nonickch
Title: Re: the AI discussion
Post by: Kobold on August 30, 2010, 04:58:58 pm
Is it possible to post the AI changes for 2.3 too?
Title: Re: the AI discussion
Post by: bayo on August 30, 2010, 07:43:05 pm
I don't think so.
Title: Re: the AI discussion
Post by: nonickch on August 31, 2010, 01:30:38 am
kobold: if you're willing to test this extensively with 2.3 and the campaign, I can most definitely produce a patch for 2.3 so you can try it out.
Most of the changes are in the ai file itself.

On the subject, what's wrong with the bloodspiders (I haven't really played since I came back)?
Title: Re: the AI discussion
Post by: Kobold on September 01, 2010, 11:50:18 am
kobold: if you're willing to test this extensively with 2.3 and the campaign, I can most definitely produce a patch for 2.3 so you can try it out.
Most of the changes are in the ai file itself.

On the subject, what's wrong with the bloodspiders (I haven't really played since I came back)?

That sounds good, yeah, I would try it. If you could upload this patch/file that would be cool. Thanks.
Title: Re: the AI discussion
Post by: jerikojerk on September 03, 2010, 11:52:06 pm
On the theme of cheaty alien vision: I don't care if you remove it. As an alien, I just follow my nose, 'cause you humans stink.


@shaunc311, I read you saying there were up to 10,000 possible moves. I don't understand why. If each alien has 30 TU, they have at most 2*15 * 2*15 = 900 new possible positions each (in fact fewer, because some corners are too far away) on an empty map. I think you can prune many of these possibilities.


I read most of this wall of a discussion, so I will explain how a true alien plays.
about moving aliens

It's a military basic for aliens: there are secured places and unsecured places. At the beginning it's impossible to find a human inside a harvester. If we lose the advantage of god-vision (that was not god vision, that's advanced alien detection), you should mark on the map where the human bastards can be found and where they can't. It's just a probability... per box. With that, aliens will not have to look everything up anew each turn. There shouldn't be many levels to start with: 0% means I'm sure there is no enemy there, either because I can see the spot or because I control all the accesses to it (that's the fun thing to program). It can be really easy, because we aliens arrive first, so the whole map could be set to 0%; then we see your dropship. It makes noise, heat, and it stinks. Next round you put 50% on every human-accessible position. You force 0% on positions where you can see humans are not, and 100% where they stand.


We aliens have observed that if we lose, it's because the humans come at us, so a good way to improve the estimate for (invisible) humans is to say that humans are more likely to come towards us than to go in the opposite direction. Even an Ortnok beast understands this.
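A rough sketch of what such a presence-probability map could look like, with the visibility/reachability helpers as hypothetical stand-ins (this is not existing engine code, and the "everything stays 0% until the dropship is spotted" refinement is left out):

Code: [Select]
#define GRID_W 256
#define GRID_H 256

/* Per-tile probability (0..100) that an unseen human is there. */
static unsigned char humanProb[GRID_H][GRID_W];

/* Hypothetical stand-ins for the real checks: */
int AI_TileVisibleToAliens(int x, int y);	/* can any alien see this tile? */
int AI_TileHumanAccessible(int x, int y);	/* could a human have walked here? */
int AI_VisibleHumanAt(int x, int y);		/* is a human actually seen here? */

static void AI_UpdateHumanProbMap (void)
{
	int x, y;

	for (y = 0; y < GRID_H; y++) {
		for (x = 0; x < GRID_W; x++) {
			if (AI_VisibleHumanAt(x, y))
				humanProb[y][x] = 100;	/* we can see one standing there */
			else if (AI_TileVisibleToAliens(x, y) || !AI_TileHumanAccessible(x, y))
				humanProb[y][x] = 0;	/* seen empty, or unreachable for humans */
			else if (humanProb[y][x] == 0)
				humanProb[y][x] = 50;	/* newly plausible: humans may have moved in */
			/* else: keep last turn's estimate; a fancier version would also
			 * bias it towards tiles approaching the aliens */
		}
	}
}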

about fire

When we aliens open fire, we have a goal. There are two kinds of goal: to wound (*** the medikit) or to kill. The wounding kind of fire should be used when reinforcements are available (that is to say, there is a living alien who can be here within 1 round).

There are optimal places to fire from. Before moving an alien you can build a map crossing the number of shots with the hit probability (standing and crouched; the current AI ignores the fact that we can fire crouched). It's hard to teach an Ortnok that the more it walks, the less it fires. If we are doing hit-and-run fire, with that system you could quit cover, walk just enough to get a higher success rate, and retake cover.

If we are on a suicide mission, we can go to the best place to fire the best shot and pray not to be shot back.

If you allow us to plan the next round... we may do more sophisticated things.

about team playing
There is a fantastic pun in French that says "team spirit is one spirit for one team". It explains how good the French football team is. Aliens can mimic this, and it's a military basic too. Aliens should group up when they are fewer than 5 boxes away from each other AND with a maximum number of participants (3 aliens per group sounds good).

How does a group work? (It reminds me of my military classes.) The speed of the group is the speed of the slowest member, and the group tries to move together. There is one leader who chooses the targets, and the (2) others shoot at them until the human dies. The good news is that those two aliens will not have to call the think() function, a computer-friendly idea. They will just obey the orders. And for us Aliens, orders are absolute. It reminds me of my military classes, I said.

As a cheat, it could be useful for aliens to see how healthy the humans are. It would allow us not to keep our own record of the damage needed to kill one human and how much the human has already taken. I know, the medikit is the humans' way of cheating.

Let me explain one other point: I previously said that some other alien could request support. My idea on this point is to alert a group of aliens not too far away, who are not engaged in a fight, to come quickly. To avoid humans getting used to it and setting up traps, the alerted aliens should refuse the support request from time to time, or choose between requests if they get two alerts from two distinct groups.

on sheevar
I just noticed someone say, oho, giving god-view to the Sheevar is OK. Don't forget that we aliens have a hive spirit (not sure of the translation). What one alien knows, all the aliens in all the galaxies know. If one Sheevar gets god view, we all get it.


on spiders
Bloody AI makers, robots are expensive. As of rev 32055, spiders are fearless and run right up in front of the humans, but don't attack because they have no TUs left. Now all the human rebels have plasma swords. The very first thing they do is destroy them; it costs them 2+7 TU, and they can all keep on shooting. There are two solutions: either you give more TUs to spiders, or you don't send them in as long as they can't make at least one attack.


on human learning techniques
There are a few things that could make the aliens adapt to the player. I think there are two kinds of players: players who accept casualties and players who don't. If we can look in the savegame at how many soldiers have died (the count, the average and maybe the standard deviation), we can guess whether we have a player who will get frustrated and restart the game if the aliens kill too many soldiers. In that case, the aliens should prefer suicide missions.
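The statistics themselves are cheap to compute; for example (a sketch, assuming the savegame could provide a per-mission casualty list, which it may not today):

Code: [Select]
#include <cmath>
#include <vector>

struct CasualtyStats {
	int missions;
	double mean;
	double stddev;
};

/* Count, average and standard deviation of soldiers lost per mission. */
static CasualtyStats ComputeCasualtyStats (const std::vector<int> &lostPerMission)
{
	CasualtyStats s = {static_cast<int>(lostPerMission.size()), 0.0, 0.0};
	if (s.missions == 0)
		return s;

	for (int lost : lostPerMission)
		s.mean += lost;
	s.mean /= s.missions;

	for (int lost : lostPerMission)
		s.stddev += (lost - s.mean) * (lost - s.mean);
	s.stddev = std::sqrt(s.stddev / s.missions);

	return s;
}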

On the other hand, the aliens can spy on human activity and politics. I think there are cases where trying to push the player to a game over through excessive civilian losses is relevant (I have never experienced this, I only saw it in the translation strings).

--
I'm just giving ideas, I don't know what the API is capable of.

Title: Re: the AI discussion
Post by: mbu on March 11, 2011, 07:39:03 pm
Hi,

I'm new to this forum and first I would say: from the short time I've been playing this game, it's great!

I've been reading this thread and I have another approach that might make the hit probability calculation faster. The problem could be solved very quickly by the graphics card; it's built for exactly this kind of calculation. If you can abuse it a little (I haven't read the source code, and I don't know if we can access it this way), it's just a matter of adding a spotlight at the shooting actor pointing at the target's boundary grid. Summing up the illumination values along the target grid's triangle edges should then give a fairly accurate hit probability (after dividing by the total light). To increase the precision, one can refine the target grid with additional points.

It's just a thought on how it could be done with "invisible" light. For such a frequently used routine it would be much faster, and quite exact.

Another, nearly identical approach is to render the scene from the gun's point of view with a field of view that frames exactly the target, draw the target in a special colour (let's say red), grab the result into an in-memory image and count how many pixels of that colour you get (in this case, the number of red pixels).
I hope we can abuse our engine this way. I'm not into graphics programming, nor have I programmed C in recent years, so I just can't do this myself.

It should be the fastest way to calculate the hit probability for a target, because graphics cards are optimised to do exactly this kind of work.
Of course this is just a rough description to give the idea.
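To make the pixel-counting variant a bit more concrete, the read-back step could look roughly like this (plain OpenGL, not the UFO:AI renderer; it assumes the scene has already been rendered from the gun's point of view with the target drawn in pure red):

Code: [Select]
#include <GL/gl.h>
#include <vector>

/* Fraction of the target that is exposed to the shooter.
 * totalTargetPixels would come from rendering the target once
 * without any occluders, using the same viewport. */
static float VisibleFraction (int width, int height, int totalTargetPixels)
{
	std::vector<unsigned char> pixels(width * height * 3);
	glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

	int redPixels = 0;
	for (int i = 0; i < width * height; i++) {
		if (pixels[i * 3 + 0] == 255 && pixels[i * 3 + 1] == 0 && pixels[i * 3 + 2] == 0)
			redPixels++;
	}
	return totalTargetPixels > 0 ? (float)redPixels / totalTargetPixels : 0.0f;
}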
Title: Re: the AI discussion
Post by: yobbo on June 26, 2013, 07:17:51 am
So, it's been really bugging me that the aliens clump up in places where they can't actually reach the player's soldiers.

It seems their positioning heuristic takes into account the absolute distance to the nearest soldier, so failing to find anything better to do, they minimize absolute distance, thus standing in corners "close" to the soldiers, and hovering above or below them on the wrong floor.

I have a proposed fix for this, but I haven't looked at the code yet, so I'm not sure if it makes sense.

My proposal is to calculate (once per turn, after the player has finished moving) the distance to the nearest soldier for every point on the map. I assume there must be some data structure in place already representing the movable area of the map, so this could be walked over for each soldier. Not too much of a computational burden as it's only done once per turn.

Thus instead of taking into account the absolute distance to soldiers, the aliens could take into account the actual movable distance, based on this calculated distance map.

It can be explained by saying the aliens can "smell" the soldiers ;).

In terms of the AI, I assume this is a fairly simple fix. Just change the distance metric to instead use the actual calculated walking-distance.

So I have two (and a half) questions; it would help if anyone knows the answers off the top of their head:

1a) Is there a movement grid structure I can easily walk over with a simple distance calculating routine?
1b) Where is the best place to store the resulting distance data? It needs to be recalculated each turn, and accessible to the alien AI.

2) Where do aliens decide that "closer is better"?

I can probably find the answers to these questions myself, but it will take some time to familiarize myself with the code. I figured I'd post my proposal and ask my questions while I do so.
Title: Re: the AI discussion
Post by: Mattn on June 26, 2013, 11:12:48 am
They are not taking the absolute distance to another soldier - but the real path distance. BUT our max path distance is sometimes too short. E.g. an alien is not able to walk out of the back of a UFO if a soldier is standing in front of the UFO (can be seen in +africa). The movement data is already cached, but it's too short. If they behave differently from this, it's a bug. They should not use the absolute distance at all, as it says nothing about reachability. If you see a part of the code that does this, please let us know.
Title: Re: the AI discussion
Post by: Mattn on June 26, 2013, 11:20:21 am
To elaborate a little bit more: what we would need is a memory of the last movement and why the AI did it. E.g. if a target was too far away and we tried to get closer, we should try to get even closer in the next turn, and not walk back to the initial location. The behaviour is based on scores (see the top of g_ai.cpp for the score defines).
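Such a memory could be as small as one record per actor (an invented sketch, not existing code):

Code: [Select]
/* Remember per actor what the AI tried last turn, so the next think()
 * can continue the approach instead of wandering back. Illustrative only. */
enum AIIntent {
	AI_INTENT_NONE,
	AI_INTENT_APPROACH,   /* target too far away, tried to close the distance */
	AI_INTENT_HIDE,
	AI_INTENT_ATTACK
};

struct AIMemory {
	AIIntent lastIntent;
	int lastTargetEntnum;                 /* enemy we were moving towards, -1 if none */
	int lastGoalX, lastGoalY, lastGoalZ;  /* where we were heading when the TUs ran out */
};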

oh, and to answer your second question (and maybe also your first question):
* We do have AI_PrepBestAction and AI_SearchBestTarget

For the caching we have Grid_MoveStore.
Title: Re: the AI discussion
Post by: DarkRain on June 26, 2013, 06:11:11 pm
They are not taking the absolute distance to another soldier - but the real path distance.
Sorry, but that's wrong, the AI does use the absolute distance - see the AI_[Fighter|Civilian|Panic]CalcActionScore functions: they are clearly using VectorDist/VectorDistSqr to get the distance to other actors.
Title: Re: the AI discussion
Post by: Mattn on June 26, 2013, 09:52:24 pm
Oh yes - you are right - and this also makes sense :o . We need a line of sight and a distance, but an alien should not try to walk to the closest actor if that actor is not reachable by walking or not even visible.

@DarkRain: didn't you fix the ai-walk-to-invisible-actor-because-he's-closest bug already?
Title: Re: the AI discussion
Post by: DarkRain on June 27, 2013, 12:28:51 am
Nope, the AI is still unable to navigate a semi-complex map to save its life. Duke was looking into it not long ago, and I remember that using the routing/pathing data for this was already discussed (and seemed the best idea); the thread should be around here in the forum.
Title: Re: the AI discussion
Post by: yobbo on June 27, 2013, 08:37:25 am
I couldn't find any previous discussion of using the routing info to solve this. If you could find a link for me that would be helpful.

So far it looks like this solution is complicated by movement depending on the actor doing the moving. It looks like the code can handle things such as smaller actors moving through smaller holes while larger ones can't.

That and the routing code is doing my head in.

Particularly
Code: [Select]
/**
 * @sa G_RecalcRouting
 */
void G_CompleteRecalcRouting (void)
{
	Edict *ent = nullptr;

	while ((ent = G_EdictsGetNextInUse(ent)))
		if (IS_BMODEL(ent))
			G_RecalcRouting(ent->model, GridBox::EMPTY);
}
Why is the routing info recalculated for every entity in use? Shouldn't there be just one routing table? Why does it relate to specific game entities at all? Am I misunderstanding?
Title: Re: the AI discussion
Post by: H-Hour on June 27, 2013, 11:10:54 am
Why is the routing info recalculated for every entity in use? Shouldn't there be just one routing table? Why does it relate to specific game entities at all? Am I misunderstanding?
My uninformed hypothesis which someone else will likely correct: I think the game may only be calculating the routing within a certain radius of each entity defined by their max TUs.
Title: Re: the AI discussion
Post by: DarkRain on June 27, 2013, 08:17:42 pm
My uninformed hypothesis which someone else will likely correct: I think the game may only be calculating the routing within a certain radius of each entity defined by their max TUs.
Let's not confuse routing with pathing.

Duke is the one to ask about this but let's see if I can shine some light on this:
There is indeed only one routing table (well, actually two: the server and client sides each have one), and as you can see G_CompleteRecalcRouting only recalculates routing for BMODELs, which are part of the map (doors and breakables, AFAIK); every other entity is ignored here. I don't know why it's needed, but as I understand it G_CompleteRecalcRouting is called once after loading the map. Maybe it has to do with our random maps, as they need to be rerouted after assembly?

Also I can't seem to find the previous discussion, maybe it was on IRC instead...

Title: Re: the AI discussion
Post by: Duke on June 27, 2013, 10:33:47 pm
Hi yobbo,
it seems the helpful members of the dev team have already at least *mentioned* the answers to all of your questions. DarkRain's guesses in particular are very close to the truth. I'm not sure those answers have been explained well enough for a newbie to the ufoai code to easily understand, so feel free to ask :)

Let me add some points:
- both routing and pathfinding tables consume quite a lot of memory
- routing is cpu-intensive
- pathfinding is not, BUT
- the AI-pathfinding is very costly because of the additional stuff (hiding, shooting, RF)

For the situation mattn mentioned (africa) I coined the term 'The harvester problem'. There is an item in the bug tracker about this.

The major AI problems are:
- AI is calculated per alien ==> NO team strategy
- AI is calculated per turn ==> NO multi-turn strategy
- AI thinks in 'cells' (== one x,y,z place) as its biggest unit ==> NO strategic view on the scene

Keep in mind that if we manage to make the AI much smarter, we'll have to rebalance the whole game.
Title: Re: the AI discussion
Post by: Telok on June 29, 2013, 08:26:33 am
I prefer to think of the current AI as a placeholder for when the real AI is implemented.

AI from scratch

1: check health/morale and set status
(different thresholds for different species, % of health/morale combo?)
cower, flee, insane, berserk, normal
examples (a quick code sketch of these thresholds follows just after this list):
bloodspider [flee 0-15, berserk 16-40, normal 41+]
taman [cower 0-5, insane 6-10, flee 11-25, berserk 26-55, normal 56+]
ortnok [flee 0-7, insane 8-16, berserk 17-45, normal 46+]
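A minimal sketch of how those per-species bands could be encoded (the values are copied from the examples above, the structures are made up):

Code: [Select]
enum AIStatus { STATUS_COWER, STATUS_INSANE, STATUS_FLEE, STATUS_BERSERK, STATUS_NORMAL };

struct StatusBand {
	AIStatus status;
	int upperBound;  /* inclusive, on the health/morale combo scale */
};

/* Bands are checked in order; the last entry catches everything above. */
static const StatusBand bloodspiderBands[] = {
	{STATUS_FLEE, 15}, {STATUS_BERSERK, 40}, {STATUS_NORMAL, 100}
};
static const StatusBand tamanBands[] = {
	{STATUS_COWER, 5}, {STATUS_INSANE, 10}, {STATUS_FLEE, 25},
	{STATUS_BERSERK, 55}, {STATUS_NORMAL, 100}
};
static const StatusBand ortnokBands[] = {
	{STATUS_FLEE, 7}, {STATUS_INSANE, 16}, {STATUS_BERSERK, 45}, {STATUS_NORMAL, 100}
};

static AIStatus PickStatus (const StatusBand *bands, int count, int value)
{
	for (int i = 0; i < count; i++)
		if (value <= bands[i].upperBound)
			return bands[i].status;
	return STATUS_NORMAL;
}
/* e.g. PickStatus(tamanBands, 5, 8) == STATUS_INSANE */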

2: implement status per alien
examples:
normal= get weapon, move to comfort zone, do job
flee= maximum movement away from enemy and out of LoS
berserk= get weapon, move closer and/or attack nearest enemy in LoS with preferred attacks

comfort zone: a set of parameters per alien species defining things like minimum and maximum distances to allies and enemies, attack preferences (most attacks, highest damage, highest hit %), target preferences (civies, objects, players, nearest enemy, best % to hit enemy, enemy nearest death), concealment and movement preferences.

The distance to allies and enemies is meant to be a simple way to diversify AI tactics. Giving bloodspiders low minimums and maximums will make them clump up and rush the player; if hovernets have high numbers they will spread out and keep their distance. A high minimum and low maximum will make them stay at range from each other but move as a group, while no minimum and a maximum higher than the map size will let them wander at will. Combine this with movement and concealment preferences to generate more dynamic action: hovernets and shevvar use 25% of their TUs moving, taman and ortnoks always reserve TUs for RF, taman and bloodspiders always move out of LoS if they see more than 2 enemies.
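For example, the distance part of such a comfort zone could be a handful of per-species numbers (the values below just restate the behaviours described above; the struct itself is invented):

Code: [Select]
struct ComfortZone {
	int minAllyDist, maxAllyDist;  /* in cells; a max larger than the map = wander at will */
	float moveTUFraction;          /* share of TUs the species spends on movement */
	bool reserveForReactionFire;
	int fleeIfEnemiesSeen;         /* break LoS when seeing this many enemies, 0 = never */
};

static const ComfortZone bloodspiderZone = { 0,  3, 1.00f, false, 3 };  /* clump up and rush */
static const ComfortZone hovernetZone    = { 4, 12, 0.25f, false, 0 };  /* spread out, keep distance */
static const ComfortZone ortnokZone      = { 2,  8, 0.50f, true,  0 };  /* loose group, keep TUs for RF */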

The distance approach also allows checking for the nearest ally/enemy, developing a path to it, and moving as far along that path as possible. This would normally be a troubling "perfect enemy information" thing, but the telepathy virus/parasite justifies the aliens knowing where they are in relation to each other. If need be we can use that for finding the player too, or mock up a ranged brain-wave scanner of some sort (an improved IR-goggle type thing or a psi-detector that finds humans?) that allows this.

Pros: Different aliens show different tactics, solves harvester pathing issue
Cons: Needs flowcharting and programming that I'm not up to
Title: Re: the AI discussion
Post by: Sandro on June 29, 2013, 04:03:57 pm
This is pretty much how current AI works.
Title: Re: the AI discussion
Post by: yobbo on July 07, 2013, 07:08:54 am
Sooo, after three failures that really should have worked, and now that I've cooled down from the resulting tantrum, I figure I'll just post what I wanted to do and hope someone can point out the best way to actually do it.

Here is some code that does not work for various reasons.

Code: [Select]
typedef struct enemy_distance_s {
	/* Approximate distance in TUs to the nearest enemy (capped at 255) */
	byte distance[PATHFINDING_HEIGHT][PATHFINDING_WIDTH][PATHFINDING_WIDTH];

	inline byte getDist(const pos3_t pos) const {
		return distance[pos[2]][pos[1]][pos[0]];
	}
	inline void setDist(const int x, const int y, const int z, const byte val) {
		distance[z][y][x] = val;
	}
	inline void setDist(const pos3_t pos, const byte val) {
		setDist(pos[0], pos[1], pos[2], val);
	}
} enemy_distance_t;

/**
 * @brief Calculate the distance to the nearest enemy for every point on the map.
 * @param[in] routing The routing map (either server or client map).
 * @param[out] enemy_dist The map of nearest-enemy distances.
 * @param[in] team The AI team we're calculating this for.
 */
void CalcEnemyDistances(const Routing &routing, enemy_distance_t *enemy_dist, const int team)
{
	/* Clear the distance grid: set all distances to max.
	 * (this is probably about 512KiB of data to maxify) */
	for (int z = 0; z < PATHFINDING_HEIGHT; z++) {
		for (int y = 0; y < PATHFINDING_WIDTH; y++) {
			for (int x = 0; x < PATHFINDING_WIDTH; x++) {
				enemy_dist->setDist(x, y, z, (byte)(-1));
			}
		}
	}

	/* Maintain a list of cells to be checked. */
	std::list<pos3_t> cells_to_check;

	/* Start with distance 0 at all enemy positions. */
	edict_t *ent = nullptr;
	while ((ent = G_EdictsGetNextLivingActor(ent))) {
		/* Assume we are hostile to members of all other teams.
		 * (ReactionFire::isEnemy seems to work this way also) */
		if (ent->team == team) {
			continue;
		}
		enemy_dist->setDist(ent->pos, 0);
		cells_to_check.push_back(ent->pos);
	}

	int updatedCells = 0;

	/* Check the surroundings of each cell,
	 * to see if we found a shorter path.
	 * If so, update the distance map, and recurse. */
	while (!cells_to_check.empty()) {

		std::list<pos3_t>::iterator cell;
		for (cell = cells_to_check.begin(); cell != cells_to_check.end(); cell++) {
			int dist = enemy_dist->getDist(*cell);

			for (int dir = 0; dir < PATHFINDING_DIRECTIONS; dir++) {
				/* Get the cell leading to this cell in this direction. */
				pos3_t from_cell;
				int crouching = 0;
				VectorCopy(*cell, from_cell);
				PosSubDV(from_cell, crouching, dir);

				/* Is it connected?
				 * For now don't take into account size etc,
				 * just if there is a grid connection. */
				if (!routing.getConn(ACTOR_SIZE_NORMAL, from_cell, dir)) {
					continue;
				}

				/* How many TUs does it take to move from there to here? */
				int tustep = Grid_GetTUsForDirection(dir, 0);

				/* Have we found a shorter distance? */
				if (tustep + dist < enemy_dist->getDist(from_cell)) {
					enemy_dist->setDist(from_cell, tustep + dist);
					/* Add to the list to be checked next cycle. */
					cells_to_check.push_front(from_cell);
					updatedCells++;
				}
			}

			cell = cells_to_check.erase(cell);
		}
	}
}

The most obvious reason this does not work is that std::list doesn't know how to deal with a pos3_t. That's easily solved by defining a new 3-pos struct to use here and converting to and fro, but it's not the main problem.

The main problem is that this combination of things I want to do apparently can't be called together from anywhere in the codebase, specifically G_EdictsGetNextLivingActor and Grid_GetTUsForDirection: grid.cpp can't call the G_ function, and g_ai.cpp can't call the Grid_ function.

Maybe I'm wrong about that, and there's some way I can easily do this, but in the end I still have no idea where this function should go.

Probably it wants to be called from G_ClientEndRound? And the enemy_distance_t must be made accessible to the AI somehow. The routing table needs to be accessible to whatever is calling this function, and enemy_distance_t needs to be declared somewhere everyone can see it.

It's possible there's a better way to organize things, but I've already tried integrating this in three different places unsuccessfully, so I really hope someone can look at this and tell me where and how it can fit in.
Title: Re: the AI discussion
Post by: DarkRain on July 09, 2013, 05:25:30 am
You should be able to call the GetTUsForDirection function via the game_import_t function pointers (see for example how G_MoveCalcLocal, G_ClientMove and co. work).

On the other hand, IIRC (most?) std containers are not to be used, so that might be a problem
Title: Re: the AI discussion
Post by: Duke on July 15, 2013, 12:55:24 am
yobbo,
if you want the TUs for the path to the nearest enemy, you should take a look at Grid_FindPath.
Title: Re: the AI discussion
Post by: Rodmar on October 10, 2015, 01:56:00 pm
Just two ideas in case the alien AI ever becomes so efficient that the whole weapon balance has to be retouched:

1/ Degrading morale and AI:
We could consider that the fewer Aliens remain on a map, the dumber they become (9 of them in a base containment unit, and thus perhaps insulated from the outside, behave like animals).

Thus, inside a Battlescape mission:
- the AI could be degraded the more Aliens are killed.
- the morale management could be made unbalanced in favour of PHALANX, especially the morale loss when an Alien is killed; that would cause the last alien survivors to go insane even more easily than now (perhaps, currently, they only go mad when they are wounded and one of their own has been killed).

2/ Toxic UFO wreckage: currently, no Aliens start a mission already wounded. We might give the alien team some initial handicap on crash-site missions by attaching to the wreckage model a few permanent anti-alien gas fields inside and around it (toxic leaks). Any alien exiting a wreck would already be partially stunned and demoralized. Of course, it should be tested that these fields don't keep the Aliens from moving and acting too much.
Title: Re: the AI discussion
Post by: ShipIt on October 10, 2015, 02:43:33 pm
... currently, no Aliens start a mission already wounded. ...

This is not true.