UFO: Alien Invasion Issue Tracker
Open Bug report #2628: FPS drops extremely in some missions
mcr2010 (mcr2010) has been working on this issue since January 28, 2013 (20:47)
[http://sourceforge.net/p/ufoai/bugs/2628 Item 2628] imported from sourceforge.net tracker on 2013-01-28 19:43:45

System is AMD64 Debian Lenny with a Core2Duo 2x2.4GHz, 2GB RAM, and an Intel 965G graphics card at 1024x768 resolution, running rev 31039 (and many revisions going back at least several months).

For example, in the skirmish "+industrial small" with default settings (apart from "+set r_programs 0", needed to get it to work at all), the game gets extremely slow when the map is zoomed out so the whole battlescape is visible, dropping FPS to 2 (two) or lower and making it completely unplayable. Zooming all the way in (and a few steps back) restores FPS to 50 (but one cannot play that way). Lowering the texture resolution cap and lightmap block size to low does bring some improvement, "boosting" the zoomed-out FPS to a (still extremely annoying) 6. Other options that can be set in the menus do not seem to make much difference.

When the slowness happens, even mouse movement becomes extremely chunky (the cursor follows the mouse half a second later), and I see one CPU hit 100% and stay there, issuing a huge number of ioctl() calls on /dev/dri/card0 per second. In Campaign mode it seems even slower than in skirmish. Some missions are quite OK, though, with no problems at all.

I do understand that this is not exactly a high-end gaming configuration, but:
- openarena, a 3D first-person shooter (arguably more dynamic), is *blazingly fast* on the same system all the time
- UFO:AI 2.2.1 did not seem to exhibit such slowness at all (and 2.3 does not seem so much more advanced graphically)

Are there other options in the config file that can be played with? Or something in the code to change to make it usable on this configuration? Perhaps lowering the rate at which the characters are "breathing"? Or at least making mouse movement normal? Anything?
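The "+set r_programs 0" workaround the reporter mentions is passed on the command line when launching the game; a minimal example (only the cvar name from the report itself, nothing else assumed):

```
./ufo +set r_programs 0
```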
===== Comments Ported from Sourceforge =====

====== mcr2010 (2010-07-19 12:08:04) ======

There will be new graphic options in the trunk version soon, where you can try to turn off more of the features that are currently on by default, such as normalmapping, flares, coronas and the material system (animations & materials). I hope this will help you play the game. I do not know whether the new graphics menu will be merged back into 2.3, though; that is not my decision... In the meantime you can try to set cvars via the console; cvars that could make a difference are r_bumpmap, r_flares and r_coronas, for example...

====== koboldx (2010-07-19 14:03:38) ======

All maps get medium performance, but BIG CITY has a big FPS drop on my PC, almost unplayable. It would be nice if a mapper looked at this map again for some optimization.

====== mnalis (2010-07-25 12:43:58) ======

Thanks mcr2010 for the answer. Unfortunately, none of r_bumpmap, r_flares, r_coronas seems to help at all. I've done more experimenting, so maybe it can pinpoint where the problem lies. I've tried changing the resolution of the game, lowering it all the way to 640x350 - it doesn't make any difference at all. The scenario gets exactly the same slow FPS at 1024x768 as it does at 640x350, so the performance does not seem to be fill-related.

I've tried putting some manual timing debug into CL_Frame() (as gprof didn't seem to give me anything useful), and here are partial results so far (I'll delve deeper if it helps): when frame rendering is slow, it might take for example 347ms on a full view of the "+industrial small" skirmish. Of that, 329ms goes to CL_ViewRender() and 15ms to MN_Draw(). Pursuing CL_ViewRender(), all of the time is spent in R_RenderFrame(), which spends 204ms in a for loop over 11 frames, and 123ms in R_DrawEntities(). As for the for loop, it spends all its time in R_DrawOpaqueSurfaces(), which in turn spends it in R_DrawSurfaces(), mostly on frames (counting from 0) 1-4 (21+86+65+14ms).
One of the ideas I had on the speedup is as follows. I'd play with it myself, but I lack the knowledge of OpenGL; if someone more experienced wants to pursue this, it would be great - if not, I could try to play with it if someone helps me a little. For example, is it possible in OpenGL to save the graphics state somewhere (everything that is to be rendered), and later restore it? Please excuse my probably wrong terminology below; as I said, I'm not into 3D programming.

As I see the flow of the game, every time CL_Frame() is called (which is up to 50 times per second if the machine is powerful enough), it clears the screen completely and builds it from scratch, rendering object by object. It would be more efficient if static things (terrain, houses, etc. - even the aliens/humans, if the user sets some speedup cvar) were pre-rendered beforehand and saved somewhere; then (in user selection mode, where most of the time in the game is spent) the only thing that would need to be added to the scene on each CL_Frame() call is the new mouse position (which should be blazingly fast). Also, a viewport change (rotating or scrolling the camera view, and zooming in and out) should not actually require rebuilding the characters, if I understand how OpenGL works (which is very, very little, admittedly :). That is, a full render would then be invoked only when needed (when an alien/human moves, or something is fired and moves through the air, etc.). Would something like that be possible from the technical point of view? But even if every object needs to be cleared and redrawn on every camera move, moving the mouse over a static screen should still not need to re-render the whole scene (if a graphics buffer save/restore is possible). I'd be grateful for your thoughts on this.

====== mcr2010 (2010-07-27 10:32:11) ======

@mnalis: Did you already try playing around with the new settings available in the new advanced graphic options menu in the current trunk version?
(Try to turn off the material system...) You should know that the current OpenGL rendering code is in the process of being completely rewritten, upgraded and optimized, but this is being done in a branch by our OpenGL wizard Arisian and will take a while to complete. If you are on Linux you could also try that branch; maybe it helps with your framerate problems... Those are the ideas I have. I cannot tell you anything about the C code, because I am not really familiar with it, but thanks for the timing analysis you offered here; it may be useful for the devs working on that part of the code... Maybe you could also attach your ufoconsole.log here for further info?!

@koboldx: Here you can address the mapper who created 'BIG CITY' about the problems you have with his map: http://ufoai.ninex.info/forum/index.php?topic=4686.msg36848#msg36848

====== mnalis (2010-07-31 22:29:24) ======

I've checked out trunk (r31098) and tried it. It does have more options to turn on/off in the advanced video screen, but none of them seem to help (I tried turning them on/off one by one and restarting the game: vertex buffers, texture normalmapping, shadows, flares, coronas, fog, weather effects, material system) -- the zoomed-out skirmish map "+industrial small" (used as the example because I know it is slow for me) gets a (terrible) 1.5 (one and a half) frames per second. I'm attaching my fresh ufoconsole.log (after nuking ~/.ufoai/2.4-dev and starting the game with "./ufo +set r_programs 0" - so default options). Thanks for the hint about Arisian's OpenGL rewrite; would that be the "renderer_work" SVN branch? I'll try that out also.

====== mnalis (2010-07-31 22:30:42) ======

On a fresh install (only "+set r_programs 0" modified to make it work on the Intel G965).

====== mnalis (2010-08-22 15:25:19) ======

As for the renderer_work SVN branch, I didn't get it to work; it segfaulted on my AMD64 lenny.
What I tried:

1) Same resolution (1024x768) but lower bpp (24 => 16) - that helped, sometimes a lot, sometimes very little.
2) Upgrading Debian Lenny (stable) to Debian Squeeze (testing) - and whoa, it helped A LOT.

So, to document: Debian Lenny runs xorg 7.3, with GL 1.4 Mesa 7.0.4 on Intel 965G 4.1.3002; glxgears gives 1470fps at default size, or 263fps when maximized to full screen at 1024x768 (24bpp). Debian Squeeze as first installed, on xorg 7.5, gave me GL 2.2 Mesa 7.7.1 but with the software rasterizer; glxgears gives 724fps at default size, or 110fps when maximized to 1024x768 (24bpp). Upon fixing the DRI permissions for my user, Debian Squeeze reports xorg 7.5, GL 2.1 Mesa 7.7.1, Intel 965G GEM 20091221; glxgears gives 783fps at default size and 75fps when maximized to 1024x768 (24bpp). That sounded almost as bad as the software renderer (and even worse when maximized!), and at least 2-4 times worse than Debian Lenny! So I proceeded without much hope, but was amazed by the good results in UFO:AI - even though some of the benchmarks gave much lower values than Lenny.

So I'll document my findings here (most of the tests were run at least 3-4 times to make sure they weren't flukes, which makes them all the stranger!). "(max)" in fps indicates the value is capped by the cvar cl_maxfps by default.

+africa map zoomed out:
  lenny@16bpp   = 40fps
  lenny@24bpp   = 8.3fps
  squeeze@24bpp = 50fps (max)
+africa map zoomed out, timerefresh command:
  lenny@16bpp   = 99fps (but moves chunkily!)
  lenny@24bpp   = 84fps
  squeeze@24bpp = 74fps
+africa map zoomed out, with console brought up:
  lenny@16bpp   = 9fps
  lenny@24bpp   = 6.2fps
  squeeze@24bpp = 50fps (max)
+industrial map zoomed out:
  lenny@16bpp   = 2.2fps
  lenny@24bpp   = 2.7fps
  squeeze@24bpp = 25fps
+industrial map zoomed out, timerefresh command:
  lenny@16bpp   = 7.5fps
  lenny@24bpp   = 12fps
  squeeze@24bpp = 54fps
+industrial map zoomed out, with console brought up:
  lenny@16bpp   = 2.2fps
  lenny@24bpp   = 2.7fps
  squeeze@24bpp = 23fps
+military_convoy map zoomed out:
  lenny@16bpp   = 4.7fps
  lenny@24bpp   = 6.2fps
  squeeze@24bpp = 45fps
+military_convoy map zoomed out, timerefresh command:
  lenny@16bpp   = 71fps
  lenny@24bpp   = 80fps
  squeeze@24bpp = 71fps
+military_convoy map zoomed out, with console brought up:
  lenny@16bpp   = ? (forgot to time it)
  lenny@24bpp   = 5.2fps
  squeeze@24bpp = 37fps

The net result is that upgrading from Lenny to the new Mesa/GL brings the game back into the "very playable" domain on this hardware, an amazing 500%-1000% speed improvement in actual gameplay. An interesting artifact is that timerefresh (which just rotates the map) and glxgears do not show such improvements, and even show 10-50% WORSE results (and sometimes more!)
I can now, on squeeze, even turn on almost all the quality checkboxes (except GLSL shaders, which are now available - the game no longer crashes on startup when they are enabled, but on my system they tend to produce strange "colored static" that makes it unplayable) and put the lightmap and texture resolution cap to normal (it won't go higher), and still have an extremely playable game in most (all?) missions - mostly at 50fps, which is the default maximum limit anyway without manually tinkering with the config file, and otherwise at least in the good-to-very-good 25-40fps range. The only problem I've found is that on squeeze the in-game gamma controls no longer work (nothing changes). I can still set gamma with xgamma(1) before starting the game, so it's no big deal.

For fellow gamers with the same problems:
- try lowering DefaultDepth from 24 to 16bpp in /etc/X11/xorg.conf, or
- upgrade from Lenny to Squeeze (it is frozen now, so it will hopefully go stable and be released in several months; you may want to wait)

If any of the devs or people with more OpenGL understanding can see what is going on here, I hope these tests will help them improve things further. I can still run some tests on lenny for the next few weeks if someone needs them; after that I'll probably go squeeze-only.

====== aduke1 (2012-09-27 23:15:32.793000) ======

- **assigned_to**: MCR

====== aduke1 (2012-10-01 00:55:05.077000) ======

- **milestone**: 2.3 --> 2.3.x
Steps to reproduce this issue
Nothing entered.
Comments (0)
Issue basics
  • Type of issue
    Bug report
  • Category
  • Targeted for
    Not determined
  • Status
  • Priority
    3. Normal
User pain
  • Type of bug
    Not triaged
  • Likelihood
    Not triaged
  • Effect
    Not triaged
Affected by this issue (1)
People involved
Times and dates
  • Posted at
  • Last updated
  • Estimated time
    Not estimated
Issue details
  • Resolution
    Not determined
  • Reproducibility
    Not determined
  • Severity
    Not determined
  • Complexity
    Not determined
  • Platform
    Not determined
  • Architecture
    Not determined
Attachments (0)
There is nothing attached to this issue
Duplicate issues (0)