Elvas Tower: 10 bit color


10 bit color

#1 User is offline   Genma Saotome 

  • Owner Emeritus and Admin
  • Group: ET Admin
  • Posts: 15,341
  • Joined: 11-January 04
  • Gender:Male
  • Location:United States
  • Simulator:Open Rails
  • Country:

Posted 27 November 2021 - 09:14 PM

Hardware for 10 bit (HDR-10) color is slowly appearing in the market -- first on TVs, but now creeping in on some monitors. My preferred art software has 48 bit RGB (is that 10 bit??) -- I normally use 32 bit and then save to .bmp for conversion to .ace

All of this makes me wonder about 10 bit color and Open Rails. Can OR handle 10 bit color? Can any of our .ace programs handle 10 bit color (or does 10 bit mean only .dds)? Would 10 bit color actually make any sense? (I suspect it might make it easier to represent the color of metals more accurately, as that's basically impossible with ordinary RGB.)

I have no clue about the answers to any of those questions... anyone know anything about this stuff?

#2 User is offline   Laci1959 

  • Foreman Of Engines
  • Group: Status: Contributing Member
  • Posts: 939
  • Joined: 01-March 15
  • Gender:Male
  • Simulator:Alföld
  • Country:

Posted 28 November 2021 - 09:23 AM

Hello.

The cab was made with CorelDraw. I took advantage of the metallic effects provided by the program. I made it sometime in 2008.

Sincerely, Laci 1959

https://kephost.net/p/2021/47/8661_e0eb1b223fe5.png

#3 User is offline   dajones 

  • Open Rails Developer
  • Group: Status: Contributing Member
  • Posts: 413
  • Joined: 27-February 08
  • Gender:Male
  • Location:Durango, CO
  • Country:

Posted 28 November 2021 - 11:56 AM

I don't know much about 10 bit color, but as far as I can tell the OR ACE file code only supports up to 8 bits per channel. The OR DDS code appears to support formats with 10 bits and 16 bits per channel. The shader code uses 4-byte floats for colors and for lighting calculations, so you can probably get 10 bits per channel out even if all of the source colors are 8 bits.
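
A rough illustration of that last point, as a Python sketch (not Open Rails code; the texel value, the 0.333 lighting factor and the helper function are made up for the example): scaling an 8-bit source value by a lighting factor in floating point produces in-between values that an 8-bit output has to round away, while a 10-bit output can keep more of them.

```python
# Sketch: why float shader math can give more than 8 bits of useful output
# precision even when the source texture is only 8 bits per channel.
# Hypothetical numbers; nothing here comes from the OR shaders.

def quantize(x, bits):
    """Round a normalized [0, 1] value to the nearest level of an n-bit integer."""
    levels = (1 << bits) - 1
    return round(x * levels)

source = 200                  # one channel of an 8-bit texture texel
as_float = source / 255.0     # shaders work on normalized floats
lit = as_float * 0.333        # some lighting factor applied in the shader
print(lit)                    # 0.26117... (the "exact" shaded value)

out8 = quantize(lit, 8)       # -> 67,  i.e. 67/255   = 0.26275 (error ~0.0016)
out10 = quantize(lit, 10)     # -> 267, i.e. 267/1023 = 0.26100 (error ~0.0002)
print(out8, out10)
```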

Doug

#4 User is offline   Serana 

  • Conductor
  • Group: Status: Contributing Member
  • Posts: 489
  • Joined: 21-February 13
  • Gender:Male
  • Location:St Cyr l'Ecole (France)
  • Simulator:Open Rails
  • Country:

Posted 01 December 2021 - 02:20 AM

 Genma Saotome, on 27 November 2021 - 09:14 PM, said:

My preferred art software has 48 bit RGB (is that 10 bit??)

That's 12 bit per color + 12 bit for alpha (transparency).

 dajones, on 28 November 2021 - 11:56 AM, said:

The shader code uses 4 byte floats for colors and in lighting calculations. So you can probably get 10 bits per channel out even if all of the source colors are 8 bits.

Nope, 4 byte means 3 bytes for color and 1 byte for transparency.

#5 User is offline   dajones 

  • Open Rails Developer
  • Group: Status: Contributing Member
  • Posts: 413
  • Joined: 27-February 08
  • Gender:Male
  • Location:Durango, CO
  • Country:

Posted 01 December 2021 - 05:20 AM

The shaders use a 4-byte float per channel, so a total of 16 bytes per color.

Doug

#6 User is offline   Serana 

  • Conductor
  • Group: Status: Contributing Member
  • Posts: 489
  • Joined: 21-February 13
  • Gender:Male
  • Location:St Cyr l'Ecole (France)
  • Simulator:Open Rails
  • Country:

Posted 01 December 2021 - 05:49 AM

 dajones, on 01 December 2021 - 05:20 AM, said:

The shaders use a 4 byte float per channel, so a total of 16 bytes per color.

Doug


Ok, I misunderstood because you were speaking about bytes... for floating-point values.

float4 means it is a vector of 4 floating-point values, each floating-point value being 32 bits (so each channel is indeed 4 bytes).

But floating-point values have, for practical purposes, no reachable maximum or minimum. So they aren't really comparable to the 8, 10 or 16-bit integer values of the original textures.
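
For comparison (a back-of-the-envelope Python sketch, nothing taken from the OR shaders): the number of levels an integer channel can hold at each bit depth, versus the granularity of a 32-bit float near 1.0.

```python
import numpy as np

# Integer channels: 2**bits distinct levels between black and white.
for bits in (8, 10, 16):
    print(f"{bits}-bit channel: {2**bits} levels")
# 8-bit channel:  256 levels
# 10-bit channel: 1024 levels
# 16-bit channel: 65536 levels

# A float32 has a 24-bit significand; the gap between 1.0 and the next
# representable value is ~1.19e-7, far finer than any of the above.
print(np.finfo(np.float32).eps)   # 1.1920929e-07
```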

#7 User is offline   Genma Saotome 

  • Owner Emeritus and Admin
  • Group: ET Admin
  • Posts: 15,341
  • Joined: 11-January 04
  • Gender:Male
  • Location:United States
  • Simulator:Open Rails
  • Country:

Posted 01 December 2021 - 01:37 PM

As a test I converted an existing file to 48 bit RGB and then saved it. The software offered a couple of proprietary formats and .tif, so I chose the latter. I then checked the properties of that file as well as a previously created screenshot from OR. Per Windows properties, the .tif bit resolution was 64; the screenshot was 24.

I don't know what MS has in mind with "bit resolution" but I'll hazard a guess it's what we are discussing in this thread.

This leads me to wonder about the entire work process -- source to game -- where the gaps are, how long it will take to fill them, and whether, for our purposes, it matters.

I do a lot of image manipulation and what I've learned is that I can save deep color files into .tif. But as I do not have a true HDR-10 monitor, I won't be able to see what I'm creating at that color depth. Not being as artistically talented as, say, Beethoven was at music (i.e., deaf) leads me to conclude deep color is probably beyond my grasp, because true HDR-10 monitors are far more expensive than I'd ever consider. Sure, I do apply ordinary effects, things like adding noise or applying a texture function, but I'd never see the results (and at the price of HDR-10 monitors, neither would anyone else), so why bother? 4K TVs? Yeah, I suppose, but to use that one would need to go from deep color .tif to OR. Is that feasible?

#8 User is offline   ErickC 

  • Superintendant
  • Group: Status: Elite Member
  • Posts: 1,001
  • Joined: 18-July 17
  • Gender:Male
  • Location:Hastings, MN, US
  • Simulator:ORTS
  • Country:

Posted 12 December 2021 - 06:01 PM

I imagine that the term "bit resolution" has to do with the fact that increasing the number of bits per channel simply offers more degrees of gradation between the extremes, analogous to how the bit depth of audio formats affects the degree of gradation insofar as volume is concerned. White will still be white and black will still be black (or the equivalents in each channel), but there will be more shades of grey in between.

I suspect that, just as with a finalized, mastered recording, this will be a case of diminishing returns. Consider that 44.1KHz 16 bit audio already exceeds the capability of analog audio (insofar as noise floor is concerned) and the human ear has a really difficult time perceiving the enhanced dynamic range of a 24-bit recording (as much as audiophiles who frequently fail double-blind tests like to claim otherwise). Like audio, I would wager that the main benefit will be for artists working with master files prior to the final export (especially since DXT compression has a limited palette per 4x4 pixel area). As an example, 48KHz 24 bit is the standard for recording and mastering because it moves quantization artifacts way out of audible range and has a much lower noise floor - both useful things when mixing together several audio streams. At the same time, 44.1Khz 16 bit is the standard for the final product because human hearing cannot perceive frequencies above 20KHz and the dynamic range of a 16-bit recording is pretty close to the limit of what you can perceive at normal volume levels (a notable exception would be classical recordings with a high dynamic range played at extreme volumes).
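
To put rough numbers on the audio analogy (a quick back-of-the-envelope calculation in Python; the bit depths and sample rates are just the common ones mentioned above): each extra bit of depth adds roughly 6 dB of dynamic range, and the highest representable frequency is half the sample rate.

```python
import math

# Dynamic range of n-bit linear PCM, roughly 20*log10(2**n).
for bits in (8, 16, 24):
    print(f"{bits}-bit: ~{20 * math.log10(2**bits):.0f} dB")
# 8-bit:  ~48 dB
# 16-bit: ~96 dB
# 24-bit: ~144 dB

# Nyquist limit: the highest frequency a sample rate can capture is half the rate.
for rate in (11_025, 44_100, 48_000):
    print(f"{rate} Hz sampling -> {rate // 2} Hz maximum frequency")
```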

Here's an easy test - for those of you who have a monitor that can take advantage of that bit depth and image editors that can create that bit depth, create a 10-bit greyscale gradient from pure white to pure black over a nominal area. Compare it to an equivalent 8-bit gradient and see if you can perceive the difference. For even better results, set up some kind of double blind test where samples are viewed in randomized order and the person selecting the samples has no idea which samples are which. This has a caveat, though - if the actual dynamic range of the monitor is closer to the dynamic range of reality (e.g. pure black is closer to an absence of light and pure white causes eye damage), then the extra gradation will really matter!
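
If you want to try that test, here is a minimal Python sketch (assuming numpy and imageio are installed; the file names and the 1024x128 strip size are arbitrary). Since PNG has no native 10-bit mode, the 10-bit ramp is stored in a 16-bit container, and you still need a display pipeline that actually shows more than 8 bits to see any difference.

```python
import numpy as np
import imageio.v3 as iio   # pip install imageio

WIDTH, HEIGHT = 1024, 128  # arbitrary size for the test strip

# 8-bit ramp: 256 possible grey levels from black to white.
ramp8 = np.linspace(0, 255, WIDTH).round().astype(np.uint8)
iio.imwrite("ramp_8bit.png", np.tile(ramp8, (HEIGHT, 1)))

# "10-bit" ramp: 1024 levels, shifted up into a 16-bit PNG container.
ramp10 = np.linspace(0, 1023, WIDTH).round().astype(np.uint16) * 64
iio.imwrite("ramp_10bit.png", np.tile(ramp10, (HEIGHT, 1)))

# View both at 100% zoom and look for visible banding in the 8-bit strip.
```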

The thing that's more important in the final product is the resolution of the image, just as the sample rate is the really important thing with audio. Digital media has suffered, to varying degrees, from having a resolution lower than what humans can perceive. For example, human hearing can detect frequencies up to 20KHz, but space considerations often meant low sample rates were used. The maximum frequency a recording can represent is half the sample rate, which is why the 11KHz audio used in the MSTS days never sounded good. Over time, file space has become less of a concern and there's no reason to use a sample rate below 44KHz unless your application contains no high frequency data. Similarly, until recently, the resolution of monitors has been significantly below the resolution of the human eye, but 4K monitors have approximately reached it. Someday they may try to hype up monitors with a greater resolution than 4K... but that becomes like audio formats with sample rates greater than 44.1KHz: at that point you've already reached the limits of what the human body can perceive. As the average sort of everyday cheap tech gets better, they have to find a new way to sell you something.
