Elvas Tower: Graphics card resource gauge in the HUD? - Elvas Tower


Graphics card resource gauge in the HUD? "Memorable" observations in v677

#1 User is offline   Eldorado.Railroad 

  • Foreman Of Engines
  • Group: Contributing Member
  • Posts: 982
  • Joined: 31-May 10
  • Gender:Male
  • Country:

Posted 26 June 2011 - 03:04 PM

Perhaps I experiment a little more at the limits of OR than others. There have been quite a few occasions where I run routes that are chock full of content, with polygons and textures stretching the limit of my 1 GB graphics card.

Others have posted/boasted about the astronomical size of the new models that are being run in OR. But my own experiments have led me down the path of very poor performance and aborts for no clear reason other than the fact that creating long consists with completely different models and textures for each car/wagon can very rapidly lead to trouble in high detail routes.

At the moment the HUD tells the user how many primitives are being drawn, but not how much video and texture memory is being used in total. Nor is there any clear indication of how much graphics card memory is in use as opposed to system memory. There has been some indication elsewhere (other OR forums) that OR might have a few memory leaks that need to be plugged. I use an ATI video card, and if I am to trust the onscreen results from the ATI TrayTool, there are occasions when the graphics card is running on empty with barely a few megabytes of video memory left free. Trouble will soon follow if the next world tile has more items that are not cached and need to be displayed. Should OR not be able to tell the user the same information, and maybe even warn the user that this condition can lead to trouble? There are occasions where OR has a seizure at 3-4 FPS and does not recover without killing the session.

In the same vein, would it not be useful if OR could pre-read all the world tiles and tell the user how many polygons are going to be displayed and how much texture memory is going to be used? Perhaps a profile of all world tiles for a given route? This would eliminate the guesswork for the user. As it is, I would have to use external utilities to guess how much trouble I am getting into for a given consist on any given route.
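For the texture side at least, a rough per-tile estimate is possible from image dimensions alone. The sketch below is purely illustrative and not anything OR implements: the function names are made up, and it assumes uncompressed 32-bit textures with full mipmap chains (DXT-compressed textures would be several times smaller).

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Rough VRAM footprint of one texture. Assumes an uncompressed
    32-bit format; a full mipmap chain adds roughly one third."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

def tile_texture_estimate(textures):
    """Sum the footprints of (width, height) pairs for one world tile."""
    return sum(texture_bytes(w, h) for w, h in textures)

# A hypothetical tile: one 1024x1024 terrain texture plus
# ten 512x512 object textures.
tile = [(1024, 1024)] + [(512, 512)] * 10
print(round(tile_texture_estimate(tile) / 2**20, 1), "MiB")  # 18.7 MiB
```

Summing such estimates over every tile (and every consist) would give the kind of route profile suggested above, even if the real number depends heavily on compression and caching.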

It also "seems" that even if idle/AI consists exist elsewhere on the route and are not being displayed, they are already loaded into video card memory ready to be displayed (this is what the ATI TrayTool is indicating to me). This means that, for a given activity, high calorie consists should not be used even though they are not going to be displayed all the time. What else would account for very low free video and texture memory readings in the ATI TrayTool?

I think I am not stretching the truth to say that v360 and v677 cache things differently. In v360 it seems less memory was used and purging happened from world tile to world tile. At the time I asked myself, "why is more memory not being used?" In v677 it seems that things are not being purged at all. When really poor performance sets in, the only way to rescue the session is to return to the windowed display and hide that window in the taskbar. Once that happens, the big purge comes and things get back to normal when you return to full screen. All this happens with 2 GB of system memory. (Although not an elegant solution, could a memory-purge key not serve the same purpose?)

The odd thing about some of these experiments is that with good LODs, MSTSBin behaves more predictably with high calorie consists/routes. Perhaps this will change in the not too distant future with OR.

In summation, good route/consist practices are just as important in OR as they are in other sims. There is no free lunch here: you really need a lot of video memory (16-32 GB, not there yet!) and a very fast GPU (or GPUs) to display lots of detail at once. I foolishly thought the woes of MSTSBin were behind me with OR, but the user really has to put things into/onto a route with the same care, even if it appears that OR should be more capable of handling the load.

Eldorado

#2 User is online   James Ross 

  • Open Rails Developer
  • Group: Elite Member
  • Posts: 5,491
  • Joined: 30-June 10
  • Gender:Not Telling
  • Simulator:Open Rails
  • Country:

Posted 26 June 2011 - 05:13 PM

Eldorado.Railroad, on 26 June 2011 - 03:04 PM, said:

There is no clear indication either of how much graphics card memory is being used as opposed to system memory being used.


Unfortunately, I don't believe this highly useful information is available to programs in a standard way (ATI and NVidia both have special APIs that programs like GPU-Z use).

Eldorado.Railroad, on 26 June 2011 - 03:04 PM, said:

In the same vein, would it not be useful if OR could pre-read all the world tiles and tell the user how many polygons are going to displayed and how much texture memory is going to be used?


It's not simple to calculate how much VRAM a given resource will take and there are plenty of types to consider - like vertex and index buffers - that aren't obviously linked to specific input data.
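Even the "obvious" parts of such a calculation rest on assumptions. A hypothetical back-of-the-envelope for just the vertex and index buffers of one model (the 32-byte vertex stride and 16-bit indices are my assumptions for illustration, not OR's actual formats):

```python
def buffer_bytes(vertex_count, triangle_count,
                 vertex_stride=32, index_bytes=2):
    """Rough size of one mesh's vertex and index buffers, assuming a
    32-byte vertex (position, normal, one UV set) and 16-bit indices."""
    return vertex_count * vertex_stride + triangle_count * 3 * index_bytes

# A 90,000-triangle wagon with roughly 50,000 unique vertices:
size = buffer_bytes(50_000, 90_000)
print(round(size / 2**20, 2), "MiB")  # about 2 MiB, before any textures
```

And that still omits render targets, shadow maps, and driver-managed copies, which is why a reliable pre-flight VRAM figure is so hard to produce.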

Eldorado.Railroad, on 26 June 2011 - 03:04 PM, said:

It also "seems" that even if idle/AI consists exist elsewhere on the route and are not being displayed they are already loaded into the video card memory to be displayed.


The models are not loaded when not in view, but some things (by accident mostly) are loaded. One interesting bug I found recently (but haven't fixed yet) is that the cab view textures are all loaded for every train! I also found that some VRAM-intensive resources that we'd intended to share across all trains were accidentally being duplicated for each and this chewed up a surprising amount of virtual memory and VRAM - but I fixed this one.

Eldorado.Railroad, on 26 June 2011 - 03:04 PM, said:

The odd thing about some of these experiments is that with good LODs, MSTSBin behaves more predictably with high calorie consists/routes. Perhaps this will change in the not too distant future with OR.


I am aware of some specific operations OR does that are terrible for VRAM usage and actually make using LODs worse than just having a single, high-detail model.

These and other issues will continue to be found and fixed, though; that's what software development is all about. :)

Eldorado.Railroad, on 26 June 2011 - 03:04 PM, said:

In summation, good route/consist practices are just as important in OR as they are in other sims. There is no free lunch here...


Of course, you can't just throw anything in and expect it to just work fine, but with OR we're constantly improving things. A long way to go though...

#3 User is offline   captain_bazza 

  • Chairman, Board of Directors
  • Group: ET Admin
  • Posts: 13,927
  • Joined: 21-February 06
  • Gender:Male
  • Location:Way, way, way, South
  • Simulator:MSTS & OR
  • Country:

Posted 02 July 2011 - 06:55 PM

Quote

just having a single, high-detail model.


I've been using mostly single-LOD models for ages, and I've never had any reports to suggest this has been a problem in either MSTS or OR in recent years... hence I continue with the practice.

Mind you, I don't get many serious beta test reports, either. So, in the absence of anything detrimental being reported, I continue to single-LOD my models.

I do occasionally use LODs for more complex scenery models.

Cheers Bazza

#4 User is offline   captain_bazza 

  • Chairman, Board of Directors
  • Group: ET Admin
  • Posts: 13,927
  • Joined: 21-February 06
  • Gender:Male
  • Location:Way, way, way, South
  • Simulator:MSTS & OR
  • Country:

Posted 02 July 2011 - 07:00 PM

Quote

Others have posted/boasted about the astronomical size of the new models that are being run in OR.


I don't think boasted is the right word... it's about communicating the facts for the sake of the sim's development. Some things work fine, others need tweaking. Being able to extend the modeling envelope is what keeps this modeler interested in train simming.
:rotfl:
Cheers Bazza

#5 User is offline   Eldorado.Railroad 

  • Foreman Of Engines
  • Group: Contributing Member
  • Posts: 982
  • Joined: 31-May 10
  • Gender:Male
  • Country:

Posted 06 July 2011 - 11:45 AM

captain_bazza, on 02 July 2011 - 07:00 PM, said:

I don't think boasted is the right word....it's about communicating the facts for the sake of the sim's development. Somethings work fine, others need tweaking. Being able to extend the modeling envelope is what keeps this modeler interested in train simming.
:rotfl:
Cheers Bazza


Which is why in the same paragraph I wrote:

Quote

But my own experiments have led me down the path of very poor performance and aborts for no clear reason other than the fact that creating long consists with completely different models and textures for each car/wagon can very rapidly lead to trouble in high detail routes.


A single "mega-polygon" model might run just fine in OR (and I have tested 90,000-polygon models). But if I create a consist of different 90,000-polygon models (not just one or two), then OR and/or my hardware will not handle it! Even if I am more moderate in my experiments, say a 50 wagon/car consist with 20,000 polygons per item, I still run into problems.

It seems that the future of model creation for train sims will rely more on textures to render the illusion of reality than on ever-increasing polygon counts. OR can do self-shadowing quite well, with some tweaks, compromises, and performance penalties, but models with baked global illumination in their textures allow the creator to be more spartan with polygons while still showing detail. You can still build the largest polygon model your machine/imagination can handle during the creation process, but the final release will be a thinned-down model with great textures that the sim can easily handle, even with many of those models displayed at the same time. You may already be doing this. The flip side of this process is that those textures are often big (by 2001 standards), and the casual observer who concludes that texture memory is not being used very efficiently will most likely be right. Pick your poison.
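The trade-off can be put in rough numbers. The comparison below is hypothetical: the vertex stride, the unique-vertices-per-triangle ratio, and the texture sizes are all my illustrative assumptions, not measurements from OR.

```python
VERTEX_BYTES = 32  # assumed stride: position, normal, one UV set

def mesh_bytes(triangles, verts_per_tri=0.6):
    # assume roughly 0.6 unique vertices per triangle after sharing
    return int(triangles * verts_per_tri) * VERTEX_BYTES

def texture_bytes(size):
    # 32-bit square texture with a full mipmap chain (~4/3 of the base)
    return size * size * 4 * 4 // 3

# Heavy model: 90,000 polys with a 1024x1024 texture.
heavy = mesh_bytes(90_000) + texture_bytes(1024)
# Thinned model: 20,000 polys, lighting baked into a 2048x2048 texture.
baked = mesh_bytes(20_000) + texture_bytes(2048)
print(round(heavy / 2**20, 1), round(baked / 2**20, 1))
```

Under these assumptions the baked version actually uses more memory overall, just shifted from vertices to texture, which is exactly the "pick your poison" point: geometry is cheap per byte, while textures buy visual detail that polygons cannot at any sane count.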

All this might be moot in a few years when we are all yawning at commonplace 16-32 GB video cards with multi-yottaflop GPUs. I might not even be alive by then, or able to pay the power bills to run one! Until then I think it is prudent to, yes, "push the envelope", but also recognize the limits of that envelope. At least, those are the limits I am trying to stay within for the moment to use ORTS! That was the reason for writing this article in the first place.

Eldorado

#6 User is offline   captain_bazza 

  • Chairman, Board of Directors
  • Group: ET Admin
  • Posts: 13,927
  • Joined: 21-February 06
  • Gender:Male
  • Location:Way, way, way, South
  • Simulator:MSTS & OR
  • Country:

Posted 06 July 2011 - 06:32 PM

Quote

Seems that the future of model creation for train sims will rely more on getting textures to render the illusion of reality as opposed to ever increasing numbers of polygons. Although OR can do self-shadowing quite well, with some tweaks and compromises, and not to mention performance penalties, models that have used baked global illumination in their textures allow the model creator to be more spartan with the polygons while still creating detail. This still allows the modeler to create the largest polygon model that your machine/imagination can handle during the creation process, but your final release will be a thinned down model with great textures that the sim will easily handle, even if there are many of those models being displayed at the same time. You may already be doing this.

I'm a self-taught modeler whose decade of modeling experience has been, until recently, aimed mainly at overcoming the limitations of our very old sim. Open Rails comes with new horizons and the chance to extend skills to take advantage of later graphics protocols. The mountain, like Jacob's ladder, just got higher and higher.

Therefore, I appreciate what you are saying and why you are saying it at this time. I agree with your erudite comments.

I would dearly love to develop more skills to produce even better models, with better graphics, for the good of the hobby.

Cheers Bazza

#7 User is offline   Genma Saotome 

  • Owner Emeritus and Admin
  • Group: ET Admin
  • Posts: 15,350
  • Joined: 11-January 04
  • Gender:Male
  • Location:United States
  • Simulator:Open Rails
  • Country:

Posted 07 July 2011 - 02:43 PM

Quote

Seems that the future of model creation for train sims will rely more on getting textures to render the illusion of reality as opposed to ever increasing numbers of polygons.


But performance isn't just about polygons: UV vertices count just as much as poly vertices. Each material type you use is equivalent to a unique texture file, and every named element is, for all practical workload purposes, a unique mesh file. These things are invisible to the casual user and may not be well understood by many modelers, but their cumulative cost in performance is very real.
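The point about UV vertices follows from how GPUs index geometry: a GPU vertex is the whole (position, normal, UV) tuple, so a position reused with a second UV becomes a second vertex in the buffer. A small hypothetical illustration (the data layout here is simplified for counting only):

```python
def gpu_vertex_count(corners):
    """corners: one (position_index, uv) pair per triangle corner.
    The GPU needs one vertex per distinct combination."""
    return len(set(corners))

# A quad (two triangles) whose four positions share one UV mapping:
quad = [(0, (0, 0)), (1, (1, 0)), (2, (0, 1)),
        (2, (0, 1)), (1, (1, 0)), (3, (1, 1))]
print(gpu_vertex_count(quad))  # 4

# The same quad with a texture seam: positions 1 and 2 each get a
# second UV, so the buffer grows to 6 vertices for the same shape.
seam = quad[:3] + [(2, (0.5, 1)), (1, (0.5, 0)), (3, (1, 1))]
print(gpu_vertex_count(seam))  # 6
```

The same splitting happens at normal discontinuities and material boundaries, which is why UV seams and extra material types cost real vertices even when the polygon count is unchanged.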

#8 Inactive_DAve Babb_*

  • Group: Passengers (Obsolete)

Posted 09 July 2011 - 03:22 AM

Genma Saotome, on 07 July 2011 - 02:43 PM, said:

But performance isn't just about polygons, uv vertices count just as much as poly vertices. Each material type you use is equivalent to a unique texture file. And every named element is for all practical workload purposes a unique mesh file. None of those things are invisible to the casual user and may not be well understood by many modelers, but their cumulative cost in performance is very real.


I suspect that, for reasons of convenience and flexibility, a lot of my multiple units are not built in the most technically efficient way, although even while building OR-spec models I still keep to what I think is a reasonable poly count, i.e. <15,000 per vehicle.

I do tend to build with lots of smaller textures.

Are we saying that each individually named part is bad? As well as each texture and each material type?
While there are some things I'm not going to easily get rid of, it would help to know which practices are best avoided where possible.

For instance, while there are a lot of interior textures used, they are often only 128x128 and used on 2-sided 'walls', i.e. 8 polys in total; so while I accept it might require another draw call, it is not a complex shape.

#9 User is offline   Genma Saotome 

  • Owner Emeritus and Admin
  • Group: ET Admin
  • Posts: 15,350
  • Joined: 11-January 04
  • Gender:Male
  • Location:United States
  • Simulator:Open Rails
  • Country:

Posted 09 July 2011 - 07:31 AM

DAve Babb, on 09 July 2011 - 03:22 AM, said:

Are we saying that each individually named part is bad ? As well as each texture and each material type ?


Yes, there is a cost... whether it's bad or not is pretty subjective.

Run Shapeviewer, open up a model, and look at the Model Hierarchy. See those brown boxes? Each box in Open Rails is called a Primitive, and each one represents one unit of work for the CPU to pass a task over to the GPU and for the GPU to do something with the request. If you look at several wagons you'll note there are always at least 6 units of work for bogies + wheels (sometimes more), without regard to their sharing a common texture file, and that's because they're named: 4 wheels and 2 bogies. I happen to be looking at one that has two more names, so even though they use the same texture and material type there are 2 brown boxes here and a third tied to Main, all using the same texture file. That costs 3 only because the modeller named some polys instead of letting them be part of Main, and now all of us get extra work for no good reason.

Some other examples: take a texture that covers a lot of the model and also has a small area with some transparent pixels. Whatever CPU cost accrues to the number of texture files, that one texture costs as much as 2, because the work to handle the transparent area is not really different for the CPU than if it had been its own file. Take the same texture, apply some of it to normal material and some of it to Lo_Shine, and that too costs the same as two texture files. So while you may have thought you did a good job squeezing your art into one texture file, because it is applied to multiple material types and includes transparency, the work required to process it in this example is the same as if you had 3 textures. Now if that's what it takes to make the model, you can't really call that bad... it just is what it is. But the cost is more than for some other model that doesn't do those things.

So what to do?
  • Understand what the Shapeviewer hierarchy is telling you WRT processing costs -- the fewer brown boxes, the better. Seeing the same texture file lined up with several brown boxes should be your clue that there might be ways to lessen the workload of processing this model.
  • Keep named elements to a minimum.
  • Art that is applied the same way on multiple models should probably be cut out of a larger, custom texture file and put into its own texture file so it can be shared in VRAM -- bogie and wheel textures are an obvious example, so is glass. Things that are going to be processed uniquely by the CPU anyway should be thought about more carefully.
  • Placing areas of transparency into their own texture file might be useful. The CPU is going to process that area uniquely anyway, and getting it away from the "normal" art means the latter gets to use all of the bits to describe its color when compressed, so it should look better.
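The brown-box counting above reduces to a simple rule: each distinct (named part, material) pair is one primitive, i.e. one unit of CPU-to-GPU work, even when the texture file is shared. A hypothetical wagon (the part and texture names below are made up for illustration):

```python
def primitive_count(groups):
    """groups: one (part_name, material) pair per polygon group."""
    return len(set(groups))

wagon = [
    ('MAIN',     ('body.ace', 'normal')),
    ('ROOF',     ('body.ace', 'normal')),      # could have stayed in MAIN
    ('MAIN',     ('body.ace', 'transparent')), # windows: same texture,
                                               # second material type
    ('BOGIE1',   ('bogie.ace', 'normal')),
    ('BOGIE2',   ('bogie.ace', 'normal')),
    ('WHEELS11', ('bogie.ace', 'normal')),
    ('WHEELS12', ('bogie.ace', 'normal')),
    ('WHEELS21', ('bogie.ace', 'normal')),
    ('WHEELS22', ('bogie.ace', 'normal')),
]
print(primitive_count(wagon))  # 9

# Merging ROOF back into MAIN removes one unit of work at no visual cost:
merged = [('MAIN' if name == 'ROOF' else name, mat) for name, mat in wagon]
print(primitive_count(merged))  # 8
```

The bogies and wheels stay at 6 primitives because they must be named to animate; the saving comes only from parts that were named without needing to be.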



Quote

I do tend to build with lots of smaller textures.


I do too... in part because I use SketchUp, but mostly because the vast majority of what I model are very large buildings, which simply look better with art tiled across large surfaces. That tiling requires matching edges, which automatically means unique texture files... it also means more UV vertices. It's not good for processing efficiency, but given the huge surface areas I need to cover I feel I don't have any alternative. Smaller models, IMO, are where better techniques can be applied.

#10 Inactive_DAve Babb_*

  • Group: Passengers (Obsolete)

Posted 09 July 2011 - 09:43 AM

Thanks for a very detailed reply.

I'm not sure how much I am going to be able to change things; my current model, a 13,000-poly car, shows about 60 boxes in SFV.

There are only a few places where I can see it being possible to remove them without a complete change of building style, which is somewhat concerning.

On the plus side, no one is going to be running consists of 100 of these critters; more like a train of 3!
