No output from my RX580. I'll test it in my PC when I get a chance. The PC has an RX560 installed at the moment so I can test that out in the X1000 while I'm at it.
RX560 in, jumper set, running... so far no freezes (I suspect an external USB hub was taking down the mouse, keyboard and the whole input on its way... scrapped it and so far, so good)
The raw speed of this card just walked over my RadeonHD 7950 with ease...
------------------------------------------------------------
GfxBench2D 2.9 (27.1.2022)
A benchmark tool for graphics cards.
Written by Hans de Ruiter.
Copyright (C) 2011, by Hans de Ruiter, all rights reserved
------------------------------------------------------------
Random:
Time (s) Ops/s MPixel/s
15.536 5149.375 1732.151
------------------------------------------------------------
Result URL: http://hdrlab.org.nz/Benchmark/GfxBench2D/Result/2432
@kas1e
Speed is back in ScummVM: 30 FPS with Grim in-game (60 FPS capped in videos). Some other slow ones (like the Wintermute games) have also gained in performance, nice
When I don't start PowerPrefs and save it every time I boot WB, I lose about 10-12 FPS (tested real-time with shaderjoy). I still don't get why. Aren't the saved preferences loaded at boot time? (I know that at least the setting is picked up from ENVARC:, since it's always set to "HIGH" on start, but it doesn't seem to get sent to the driver unless PowerPrefs was started?) Sounds like a major bug in the Prefs program/driver handling to me.
Is there a way to automate the PowerPrefs load/save? I know there was this kind of program (or hack?) to do just that: start a program, save, close... but I'm not sure if I'm mixing that up with the requester automation...
I wrote myself a stupid little script which can be placed in user-startup and does nothing else than run and close Power(Prefs)... I think that is sufficient to power up the gfx board...
So it seems you got it running fine with your X1000+RX560.
Why do we need to do what you do with CFE? Does it wait for a supported gfx card to be ready before it can proceed with the boot, or what? And so we need to disable gfx output completely to make it skip that step?
I don't seem to have any X1000 or CFE docs on my X1000 for some reason... have stuff about U-Boot on A1XE, but that's not helping I guess ;)
So basically what we need is to turn off gfx with some command sequence. Do we need the jumper, or is that just for recovery, to be able to restore settings? Can't these settings be changed from OS4 with some NV commands? I thought so.
Software developer for Amiga OS3 and OS4. Develops for OnyxSoft and the Amiga using E and C and occasionally C++
Correct. The GPIOLV10 jumper basically forces serial output and skips graphical output, thus skipping the need to init any gfx card in CFE (which is mandatory, because the RX cards aren't known by CFE).
No OS4, NV or CFE command will help with that, sorry.
I don't seem to have any X1000 or CFE docs on my X1000 for some reason.
Go to amisphere.com and log in or create an account. In the download section you will find documentation for the X1000, including the CFE docs.
That's a little drawback of modern cards: they perform better in large tests and worse in small ones. Though the small ones are almost never used in real life, only in benchmarks. For "real" usage the Radeon RX with HIGH settings is much faster/better.
I will do some closer comparison to see if there is a particular test that is driving this.
If you compare the specifications, the R9 270X is a faster card across the board: higher fill rate, texel rate, and GFLOP/s. You can see this reflected in the GfxBench2D results. Compare any of the individual benchmarks, and you'll see it come out on top.
Actual performance is complicated, though. Maybe there's something in the Polaris architecture that helps it with 3D, or at least, the 3D games we have.
The R9 270X is, in relative performance, 154% faster than the RX550, but the RX580 is in relative performance 210% faster than the R9 270X. The fastest R9-based card, the R9 Fury X, is 116% faster than the RX580.
So the R9 270X has roughly 2x everything compared to the RX550 (shaders, TMUs, ROPs and CUs), and the 580 has roughly 2x everything again compared to the R9 270X, and the L2 cache probably accounts for a lot of the performance differences... I don't know if/how CUs are used in pure graphics-related benchmarking, but TMUs should make a great deal of difference in 'modern' 3D games. Add to this memory handling, memory size, memory transfer speeds etc.
So the R9 270X should be faster than many RX5xx cards, but it isn't. The only thing is that it doesn't support FP16 (16-bit floating point); perhaps this means it has to do 16-bit floats in software, and that makes it slower.
@trgswe And you forget one small thing: we are on AmigaOS, with the AmigaOS kernel, AmigaOS itself, and AmigaOS drivers. That means any part of this combo can play a role in reducing performance, making all those comparison tests from Windows/Linux irrelevant.
Theoretically, and "how it should be": yes, it should be like you say. But the reality is AmigaOS and the few people working on it, meaning everything can be different.
Add to that that we are on a big-endian platform, meaning everything needs to be byte-swapped, which takes additional time/cost, meaning everything again will be different.
Also add power management, which may or may not work as expected. For example, why doesn't our power management raise the resources to maximum when needed? Because it feels not enough data is being sent and it's not the time to raise the resources? But then why? Where is the problem? And so on.
I wrote myself a stupid little script which can be placed in user-startup and does nothing else than run and close Power(Prefs)... I think that is sufficient to power up the gfx board...
Btw, I was told that "prefs not loading at boot" is expected, because nothing loads up the Power prefs on boot. So to make it work after each boot, you need to run it from user-startup in QUIET mode, just like this:
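The actual line didn't make it into the post; a guess at what such a user-startup entry might look like (the SYS:Prefs/Power path and the exact switch spelling are my assumptions, check your Prefs drawer and the program's docs):

```
; in S:User-Startup - start the Power prefs quietly so the saved
; settings get pushed to the driver, then let it exit
; (path and QUIET switch are assumptions, adjust to your setup)
Run <>NIL: SYS:Prefs/Power QUIET
```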
Wonder if someone could test what CPU usage they get from the Warp3DNova SDK examples? Personally I get 99% CPU use. I cannot recall if this was the same with a RadeonHD card.
Examples to try: W3DNRenderToTex, W3DNTextureCube, W3DNBitMapCube