Getting any engine built into ScummVM is simply a matter of enabling it in the build line, i.e. telling the ScummVM build process which engines should be built in, and that's all.
Everything which does not use OpenGL/shaders simply enables and works, that's all: Majestro should just enable EVERYTHING; if I remember right, the ScummVM build system has an option for them.
When it comes to OpenGL: when I made my own builds, there were issues with shaders which needed fixing. Some of the fixes went into the main ScummVM trunk, some did not (if I remember correctly). But the main point is that without proper testing and knowing what you do, none of the OpenGL engines will work, especially the one which handles Grim, Monkey Island and co.
@Majestro IMHO, as you are only at the starting point of learning how to make AmigaOS4 apps (and even with AI help you will need all this understanding), maybe it is worth uploading Raziel's 2.9 version back to os4depot right now, then continuing to test here in this thread, and only once it is stable and tested, uploading yours to os4depot?
Also, you need to know that every "big" project on AmigaOS4 needs a so-called "stack cookie" inside the binary itself (not an icon option or whatever), and it should usually be 2 MB to be safe (Odyssey has 1 or 2 MB for example; ScummVM should have at least 2 too). That is done with one single line put somewhere in the source code (it doesn't matter where), but preferably right before main() at the top, like this:
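The actual line appears to have been cut from the post above; as far as I know it is just an embedded string in the `$STACK` cookie format. A minimal sketch (the 2 MB value and the `__attribute__((used))` to stop the compiler stripping the unreferenced string are my assumptions; verify the exact format against the AmigaOS 4 SDK docs for your toolchain):

```c
/* AmigaOS 4 "stack cookie": a string embedded in the binary that tells
 * the OS the minimum stack size to launch the program with.
 * 2097152 bytes = 2 MB. Format and size are assumptions -- check the
 * AmigaOS 4 SDK documentation for your toolchain. */
__attribute__((used)) static const char stack_cookie[] = "$STACK:2097152";
```

The idea is that the loader honours the cookie regardless of what stack size is set in the icon or shell, so users don't have to raise it by hand.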
Then, you should also be doubly sure that you enable ALL engines in ScummVM. I do not remember the switch, but there was one for sure. Or simply list the supported engines and enable them in the command-line build string yourself. But be aware that the ones which use OpenGL need deep checking to make it all work correctly: I did it before, I just didn't upload over Raziel's work, and I'm not sure if he did it too or not.
Also, you can enable all experimental engines too, that's no problem either.
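For reference, the switch kas1e can't remember is, if I recall correctly, ScummVM's `--enable-all-engines` configure option; treat the exact flag names below as something to verify against `./configure --help` in your tree.

```shell
# Build everything (flag name from memory -- verify with ./configure --help):
./configure --enable-all-engines

# Or start from nothing and list engines yourself
# (the engine names here are just examples):
./configure --disable-all-engines --enable-engine=scumm,grim
```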
Quote:
And they are more or less logical on the motherboard: the first row is ports 0 and 2 (channel 0), and the second row on the motherboard is ports 3 and 1 (channel 1).
I knew it had to be something like that. I would prefer it to follow a known standard. But even CFE matching the numbers would have been nice.
Quote:
It hits on the Master and queries everything from the Master; if there is no Master, then it simply idles until timeout and moves on.
Sounds like the annoying dots on screen when the A1/XE couldn't find a master. But I have had a drive disappear on my X1000 right after a reboot, and CFE stopped with no drives found. I don't know what it is, but my X1000 can be fine for an hour or so, usually with Linux, then I reboot and suddenly get SATA errors on boot.
Quote:
Electra has U-Boot instead; dunno how buggy it was for them, whether it worked for them at all at first, and whether they later switched to CFE too or not.
They did. The plot thickens. Maybe they wanted the best of both worlds. There is UBoot support in CFE.
Just noticed the following in the AmigaONE X1000 Firmware and Booting Guide, Version 5:
-nousb = will disable USB while booting. Amigaboot.of does that automatically.
If Amigaboot cannot read or detect any USB devices, then how can it disable USB?
You know, AmigaOS 4.1 lags behind on almost everything MorphOS already has. Congratulations; then the new engines and games shouldn't be a problem.
Necronomicon: The Dawning of Darkness
Crime Patrol
Crime Patrol 2: Drug Wars
The Last Bounty Hunter
Mad Dog McCree
Mad Dog II: The Lost Gold
Space Pirates
Who Shot Johnny Rock
Since Ultima 8 is in high demand, I'm going to try to get this engine running next. I've since bought the game because no one is properly testing it or providing information about it. What no one seems to realize is that new versions sometimes bring a lot of changes, all of which need to be taken into account so that I can continue using the new version without any issues.
Right now, I don't want to rush out new ports; it's more important to me that everything works properly. I'm still dealing with some engines that aren't fully functional, including Grim Fandango and other games like Syberia 1 & 2. I'll think about it once those issues are resolved. But yes, I know Syberia 1 and 2 also run on MorphOS, just like the Wintermute games.
MacStudio ARM M1 Max Qemu//Pegasos2 AmigaOs4.1 FE / AmigaOne x5000/40 AmigaOs4.1 FE
I meant the stack cookie in the binary itself. Smaller icon values have no impact.
The SDL question is tricky, also considering the different C library variants for AmigaOS 4. Use what works for you. My pick is SDL3: it's cleaner and it gets new features. SDL2 is more or less in maintenance mode.
I'm planning on moving all my personal todo tracking to this app.
A few things (I've not had the chance to really use it yet, so more to come):
1) Do you have plans to support encrypted traffic to the server?
2) Can you test with ZitaFTP on OS 4? I tried it and it crashed, but I have not had a chance to debug.
3) Any plans for an Android widget?
ScummVM should have a 4-megabyte stack cookie, unless it was removed in your build. It sounds a bit overkill; I don't know if anyone has tried to measure the actual stack usage.
I also think 4 MB is a bit excessive, but I could increase it to 1 MB. Currently, ScummVM uses a stack size of 32768, and it’s stable, but for some engines, a larger stack size might be beneficial. Thanks for the suggestion.
Would switching from SDL2 to SDL3 make sense for ScummVM? In other words, do we currently see any benefits from using SDL3?
This is all WIP still, and it's not like I haven't used suggestions from Claude-AI, but this is how it currently works.
For push constants, the transpiler knows the struct layout from the SPIR-V reflection data, so it can emit correctly-typed uniforms on the GLSL side. The data goes through the GL uniform API, so the driver stack handles any byte-swapping the same way it would for any other GL program, so nothing special is needed on our end.
For UBOs the descriptor infrastructure's there, but right now everything gets flattened to push constants during transpilation. Partly because GLES2 doesn't have native UBOs, and partly because 128 bytes has been enough so far. Once shaders start reading from arbitrary offsets in a bulk buffer, the driver needs to know what type lives at each offset to get the byte order right. The plus side is that the information is all in the SPIR-V, so it's solvable without applications having to do anything extra.
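A toy sketch of the flattening idea described above (the data structures and names are mine, not the actual transpiler's): each reflected field is read out of the raw push-constant blob at its declared type, and that typed value is what would then be handed to the matching `glUniform*` call, so the GL stack only ever sees host-order data.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical reflection record for one push-constant field, as it
 * might be recovered from SPIR-V: where the field sits in the (up to
 * 128-byte) blob and what type it is. */
typedef enum { FIELD_FLOAT, FIELD_INT32 } FieldType;

typedef struct {
    const char *name;   /* uniform name emitted on the GLSL side */
    uint32_t    offset; /* byte offset inside the push-constant blob */
    FieldType   type;
} PushField;

/* Reading through memcpy into a typed variable means the value passed
 * on to glUniform1f / glUniform1i is an ordinary host-order float/int,
 * so any byte-swapping is left to the GL driver stack. */
static float read_float(const uint8_t *blob, uint32_t offset)
{
    float v;
    memcpy(&v, blob + offset, sizeof v);
    return v;
}

static int32_t read_int32(const uint8_t *blob, uint32_t offset)
{
    int32_t v;
    memcpy(&v, blob + offset, sizeof v);
    return v;
}
```

This is also why the UBO case needs the per-offset type information: without a `PushField`-style table covering the whole buffer, there is no way to know at which offsets a byte swap is a float and at which it is an int.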
I'm not sure of the reason and thought it may have been a hardware limitation. I actually wonder if those port assignments for port 0 and port 2 mean anything at all. They always looked confusing to me. Did they make a mistake on the board's SATA labels and decide to "fix" it in the manual?
I did some tests, and can say for sure that:
SATA Port 0 -> Channel 0 master
SATA Port 2 -> Channel 0 slave
SATA Port 1 -> Channel 1 master
SATA Port 3 -> Channel 1 slave
And they are more or less logical on the motherboard: the first row is ports 0 and 2 (channel 0), and the second row on the motherboard is ports 3 and 1 (channel 1).
Quote:
So it takes the time to scan IDE but ignores the other SATA bus? Doesn't really make sense. But the X1000 was rigid compared to A1/XE and UBoot.
It hits on the Master and queries everything from the Master; if there is no Master, then it simply idles until timeout and moves on.
Quote:
So, whoever patched the Electra firmware it seems
Electra has U-Boot instead; dunno how buggy it was for them, whether it worked for them at all at first, and whether they later switched to CFE too or not.
I saw red flags when I read in the Broadcom CFE source that it was not designed for anything other than a MIPS CPU and should not run on anything else. So whoever patched the Electra firmware, which it seems was the reference design for Nemo, not only went against that warning but took it further and hacked an OF backend into it, for whatever reason. Presumably because Linux supports OF out of the box, although U-Boot would have been a cleaner choice.
So CFE is a big hack. Apparently bugs like a crash after running a binary that exits are due to the OF design: it corrupts the runtime or similar, because OF is designed to jump into client code and never come back. It's a wonder the CFE commands work at all without crashing. But somehow, when I tested real CFE binaries compiled with the actual CFE SDK, they didn't crash for me.
Anyway, for USB booting, regardless of how it happened in the past, I can only see one way of doing it now, be it a good or bad news kind of way. It would need a usbboot.of binary chain loader that combines a loader with all the Kickstart files in one binary. The good news is that it could load from FFS, ISO or any filesystem supported in CFE. The bad news is that it would load the kernel directly and would not give any boot menu to choose a layout or any options. It would allow compressing the kernel, however, even if only with the older GZip.
I thought @kas1e was just having hardware issues. I didn't show the rest of the tests because they were even stranger...
I gave an example of the version I'm using, "QEMU GPU passthrough 11-rc1," but it doesn't really matter because it works the same way on QEMU 9 and 10. In QEMU there's even a slowdown in GPU passthrough performance under PPC. The QEMU developers don't have time to look into it and debug it. Of course, I don't blame them; they have other, more important things to do.
As for the X1000, this system was considered the best-optimized Amiga NG. Maybe it's some Vulkan optimization issue, maybe something else.
Can you run the same test I provided and post the results? Maybe it will help someone figure out why it’s acting up for you and @kas1e.
Thanks!
Does your x86 QEMU system configure the Radeon RX card to use PCIe v3 by any chance? That would give you a big bandwidth advantage over the A1-X1000 and its PCIe v1 slot.
Ah ok - it looks like the following was added after you looked at the Vulkan spec.
VkVertexInputAttributeDescription in the pipeline's vertex input state gives:
- format (e.g. VK_FORMAT_R8G8B8A8_UNORM, VK_FORMAT_R16G16_SFLOAT, VK_FORMAT_R32G32B32_SFLOAT)
- offset within the vertex buffer
- binding number
That might have been in the draft too.
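For concreteness, here is roughly how an application fills in those attribute descriptions (the struct is mirrored from the Vulkan spec so the sketch is self-contained without vulkan.h, and the vertex type plus the hard-coded VkFormat values are my own illustration; double-check them against the spec):

```c
#include <stddef.h>
#include <stdint.h>

/* Mirrors Vulkan's VkVertexInputAttributeDescription (field order per
 * the spec); defined locally so this compiles without vulkan.h. */
typedef struct {
    uint32_t location; /* shader input location */
    uint32_t binding;  /* which bound vertex buffer */
    uint32_t format;   /* VkFormat enum value */
    uint32_t offset;   /* byte offset within one vertex */
} VtxAttrDesc;

/* Example vertex the attributes describe (made up for illustration). */
typedef struct {
    float   pos[3];  /* location 0 */
    uint8_t rgba[4]; /* location 1 */
} Vertex;

/* Numeric VkFormat values copied from the spec (worth re-checking). */
#define FMT_R32G32B32_SFLOAT 106u /* VK_FORMAT_R32G32B32_SFLOAT */
#define FMT_R8G8B8A8_UNORM    37u /* VK_FORMAT_R8G8B8A8_UNORM   */

static const VtxAttrDesc attrs[] = {
    { 0, 0, FMT_R32G32B32_SFLOAT, offsetof(Vertex, pos)  },
    { 1, 0, FMT_R8G8B8A8_UNORM,   offsetof(Vertex, rgba) },
};
```

So per attribute the driver learns the exact component type and width at each offset, which is what makes vertex data tractable on a big-endian host.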
What about other data buffers, such as shader constants? IIRC, you could bulk upload data to the GPU, and then let the shaders read from anywhere within the buffer. In this situation, you only know the data layout when it's actually being used by a shader.
Being able to share pointers and data directly between CPU & GPU is really nice, from an API perspective, because it simplifies things (handling the different address spaces was a driver nightmare). From memory, Apple's Metal API has this thanks to a unified memory architecture where both the CPU and GPU use the same address space. Unfortunately, this is only possible if the CPU and GPU use the same endianness, and have MMUs set up to match each other. So the hardware needs to be designed for it.
@Capehill I ran another test. My standard benchmark result is 91 FPS. With Sashimi enabled in “shell” mode (outputting to the console), I get 23 FPS. With Sashimi enabled in “quiet” mode (no console output), I get 70 FPS.
Just running Sashimi to capture that large debug serial stream on my slow hardware eats up 20 FPS.