Weird Graphics Problem
#11
Posted 02 August 2023 - 01:09 PM
Log attached. OpenRailsLogfreeze.txt (83.67K)
Mick Clarke
MEC
#12
Posted 02 August 2023 - 02:29 PM
2. Failing hardware, RAM, GPU/RAM?
3. To narrow down what caused this you might have to do a bit of a binary search through the various testing releases, to see where it fails.
4. I have looked at your log file; there is a lot of stuff being loaded at start-up. Any bug(s) in the timetable data?
5. I keep my viewing distance (ViewingDistance) way down, to about 2500 meters; this might help.
6. Things should not break like this without an explanation. Sometimes we have to dig deeper to find which developer pulled which source and mangled things. That kind of inquisition can get sticky, so be prepared for some friction.
7. Maybe the answer is simple. Too many render primitives, etc? Some OR registry setting got flipped?
8. It is too bad that "upgrades" break things, and keep on breaking things. I do not like that. Often I just stick with some version and do my testing/experiments from there. But you have been with this project for many years, so I think you would know that.
Steve
#13
Posted 02 August 2023 - 11:25 PM
cesarbl, on 02 August 2023 - 10:35 AM, said:
You're right, Cesar. Content is outside our control, so not everything can be tested before it's published. Inevitably our Open Rails project relies on prompt and specific feedback from our users.
cesarbl, on 02 August 2023 - 10:35 AM, said:
Please don't stop contributing improvements; our project is benefiting from your expertise.
#14
Posted 02 August 2023 - 11:32 PM
Eldorado.Railroad, on 02 August 2023 - 02:29 PM, said:
Good advice, Steve, but we don't keep all the Testing Versions (although this is changing). However we do keep all the Unstable Versions, so the first step would be to find the last Unstable Version which doesn't have the problem and the first one that does.
If you can do that, Rob, then we can work through the PRs to see which one made the difference.
#15
Posted 02 August 2023 - 11:47 PM
systema, on 02 August 2023 - 01:09 PM, said:
Log attached. OpenRailsLogfreeze.txt
Mick Clarke
MEC
The hang doesn't seem to be related to waiting points and reverse points. Looking at the log, there are two hypotheses:
1) an unwanted loop in handling an animation of a steam locomotive
2) simply a temporary overload of the CPU.
#16
Posted 03 August 2023 - 12:49 AM
This is suspicious. I must have introduced a really naughty bug, but I am as yet unable to find it. I'm now inclined to think that NaN propagation is involved in the problem.
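The NaN-propagation suspicion is easy to illustrate: once a NaN enters a floating-point calculation, every value derived from it is also NaN, and every ordered comparison against it is false, which can silently defeat loop-exit and range checks. The sketch below is a generic, hypothetical example of that behaviour (it is not Open Rails code):

```python
import math

# Hypothetical animation step: a bad upstream calculation
# (e.g. 0.0 / 0.0) yields NaN, which then poisons everything.
speed = float("nan")
position = 0.0
position += speed * 0.016     # position is now NaN too

assert math.isnan(position)

# All ordered comparisons with NaN are False, so bounds checks
# never trip -- one common way loops run away or animations hang.
print(position < 1000.0)      # False
print(position >= 1000.0)     # False
print(position != position)   # True: the classic NaN self-test
```

Because both branches of a `value < limit` / `value >= limit` pair are false for NaN, code that assumes exactly one branch must be taken can loop forever, which would fit the freeze described here.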
#17
Posted 03 August 2023 - 01:09 AM
Eldorado.Railroad, on 02 August 2023 - 02:29 PM, said:
2. Failing hardware, RAM, GPU/RAM?
3. To narrow down what caused this you might have to do a bit of a binary search through the various testing releases, to see where it fails.
4. I have looked at your log file; there is a lot of stuff being loaded at start-up. Any bug(s) in the timetable data?
5. I keep my viewing distance (ViewingDistance) way down, to about 2500 meters; this might help.
6. Things should not break like this without an explanation. Sometimes we have to dig deeper to find which developer pulled which source and mangled things. That kind of inquisition can get sticky, so be prepared for some friction.
7. Maybe the answer is simple. Too many render primitives, etc? Some OR registry setting got flipped?
8. It is too bad that "upgrades" break things, and keep on breaking things. I do not like that. Often I just stick with some version and do my testing/experiments from there. But you have been with this project for many years, so I think you would know that.
Steve
Re. points 1, 2, 4 and 5: these can be discarded, as both versions, old and new, are now running in the same environment, using the same settings, data etc. If any of these issues were the cause, both versions should be affected in the same manner.
Re. point 3: see below.
Re. points 6 and 8: yes, I know; been there before.
Re. point 7: it looks like that, but what has been changed to cause this?
cjakeman, on 02 August 2023 - 11:32 PM, said:
If you can do that, Rob, then we can work through the PRs to see which one made the difference.
Sadly, that's not possible. I have worked on the new update for months; it comprises over 2000 lines of code changes. Also, the full timetable has been adapted to the new version, adding new commands to a very large number of trains, which took me weeks to complete.
To insert all those patches into previous unstable versions would be an immense task. Given the time required to set up the test for a specific unstable version, start the timetable and run to the time which properly shows the problem, it would take at least a day per unstable version to test.
To revert the timetable changes so it could work with the unaltered code would also take quite some time.
Due to the problems I had with Git (as I explained elsewhere), I have already spent months on this update, not making any progress, but simply getting things sorted out so the changes could be committed and made available to others. I am not going to spend more months sorting out problems which are not of my making.
I can commit the changes to the latest version and leave it at that. I have a properly working version with which I am happy. We can then forget about this whole issue and pretend it does not exist.
Sorry if this sounds a bit harsh, but it's not the first time I have made the effort to share my progress by committing my latest changes, only to run into all kinds of issues which have nothing to do with me, but which take up a lot of my time to sort out. I find this all rather frustrating.
Regards,
Rob Roeterdink
#18
Posted 03 August 2023 - 01:36 AM
cjakeman, on 02 August 2023 - 11:32 PM, said:
If you can do that, Rob, then we can work through the PRs to see which one made the difference.
Although not announced yet, we have the last two years of Testing Versions available already. This was built for the new website. (We have the same setup for Unstable Versions going back to March, possibly further in the future.)
The Testing Version is by far the best option for finding when a regression happened, since it only moves forward as new code is merged, unlike the Unstable Version.
roeter, on 03 August 2023 - 01:09 AM, said:
To insert all those patches in previous unstable versions would be an immense task. Given the time required to set up the test for a specific unstable version, start the timetable and run to the required time which properly shows the problem, it would take at least a day for each unstable version to test.
You do not need to do anything with your code yet.
You (and anyone else experiencing an issue) should download and run the latest Testing Version, and then older Testing Versions, to see which ones have the issue.
No code changes, no development environment needed.
If you can find the first Testing Version to have the issue, we'll have a much easier time figuring out the source and the fix.
If you don't find any Testing Versions with the issue (especially if you wait a week), it is likely that your code and the rest of the code are interacting badly somewhere, which you'll have to figure out.
Note: Always start development from the "master" branch.
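The search James describes is simply a binary search over the ordered list of Testing Version builds: known-good at one end, known-bad at the other, halving the interval each time. A minimal sketch of the bookkeeping, using hypothetical version labels and a `has_issue()` placeholder standing in for the manual "download, run, observe" step:

```python
def bisect_versions(versions, has_issue):
    """Return the first version in the ordered list for which
    has_issue(version) is True. Assumes the oldest version is
    good and the newest is bad."""
    lo, hi = 0, len(versions) - 1       # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if has_issue(versions[mid]):
            hi = mid                    # regression is at or before mid
        else:
            lo = mid                    # regression is after mid
    return versions[hi]

# Hypothetical build labels; in practice has_issue() is a person
# running each build and watching for the freeze.
builds = ["T1.5-100", "T1.5-110", "T1.5-120", "T1.5-130", "T1.5-140"]
bad_from = {"T1.5-130", "T1.5-140"}
print(bisect_versions(builds, lambda v: v in bad_from))  # T1.5-130
```

With N builds this needs only about log2(N) test runs, which is why bisecting the Testing Versions is far cheaper than testing every build in sequence.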
#19
Posted 03 August 2023 - 02:09 AM
James Ross, on 03 August 2023 - 01:36 AM, said:
No code changes, no development environment needed.
As I explained, I adapted the timetable to run with the update. Because of this, the timetable will no longer load in the old program versions, so I cannot run these tests without a lot of extra work.
Regards,
Rob Roeterdink
#20
Posted 03 August 2023 - 05:25 AM
roeter, on 03 August 2023 - 02:09 AM, said:
Does this graphics anomaly appear only when you run the timetable to a certain point, then?
I wonder what you see if you place the train at that location and run an un-adapted Testing Version in Explore mode.