
Board index » All Posts (Georg)




Re: Deleting complete pixel lines in ImageFx4?
Just popping in


Quote:

I already had a quick look on Gimp but did not find that type of command.


Maybe Layer -> Transform -> Offset (with "wrap around" activated) to move the lines from the middle of the picture to the outside, then a normal crop, then the reverse offset.



Re: Catching memory corruption "in the act"
Just popping in


Quote:

I guess my only option is Linux? :(


If a PPC binary of the program exists: you can run 32-bit or 64-bit PPC big endian programs on little endian 64-bit x86-64 Linux with qemu user mode emulation ("qemu-ppc" instead of "qemu-system-ppc"). You don't need to have PPC Linux installed; just download some Linux PPC iso and loop-mount it (and likely some filesystem image inside it as well), so you can tell qemu with "-L" where to pick up the PPC versions of the libs the PPC binary needs.

I recently tried this with an old (2011) version of 32-bit AROS PPC hosted on Linux and it works pretty well. On an old PC here (10 years old?) it "boots" into WB in 3 to 4 seconds (the x86 AROS hosted debug version "boots" in maybe 0.5 seconds).



Re: clib2 vs newlib perfomance issues
Just popping in


@kas1e

Quote:

#include <stdio.h>

int main(void) {
    for (int i = 0; i < 1000000; i++) {
        printf("%3d %3d %3d\n", 255, 128, 0);
    }
    return 0;
}



What results do you get if you change the printf line to

printf("255 128   0\n");


And what if you change it to:

puts("255 128   0");


And what if you redirect output to a different place, like some file in RAM: or to NIL:?



Re: NULL, 0xFFFFFFFF and Exec: Real vs QEMU
Just popping in


It is legal and documented for programs to pass 0xFFFFFFFF as window title or screen title, and it means "leave it unchanged", in case a program wants to change only the window title or only the screen title.

I'm not sure what you mean by "code sends NULL" but you see 0xFFFFFFFF. Whatever params the program passes to the function, you should see exactly the same in the patched function. Exec is not involved at all.



Re: QEMU GPU-PCIe AmigaONE
Just popping in


Is pulseaudio being used? I think that thing may take control of ALSA (I read somewhere): everything using ALSA ends up going through pulseaudio (and only then to the ALSA kernel drivers).

Ever since I have been using Linux I do so by logging in as "root" (although you shouldn't, or maybe "because you shouldn't ..."). After an update of the OpenSUSE Leap distro to a newer version, sound no longer worked when logging into the desktop (KDE) as root. What I had to do was start pulseaudio as a daemon by adding "pulseaudio -D" to one of the start/login scripts. I put it in "~/.profile".



Re: QEMU GPU-PCIe AmigaONE
Just popping in


@balaton
Quote:
But we would need results from the same machine with the same benchmark on host, Linux guest and AmigaOS guest to be able to compare and I don't know who could get those results.


Btw, why not do some tests with AROS x86? If one downloads a live distribution like "AROS One 2.4" one can use it directly with

qemu-system-i386 -cdrom AROS-One-ISO-DVD-2.4.iso


and does not need to install anything. With its vesa driver it shows you the framebuffer address (sys:system/hardware/pcitool) and it comes with gcc, so you could directly type in some benchmark that pokes and peeks directly into VRAM and run it.

You could then see the difference between the emulated CPU (qemu default? similar to the CPU emulation on PPC hardware?) and hardware virtualization (start qemu with -enable-kvm).

And I guess the AROS vesa driver should work with a passed-through radeon gfx card? So you could then compare those results with AOS4 using the same passed-through radeon gfx card.

In case these kinds of tests are more difficult/annoying in a Linux guest and/or on a Linux host ...



Re: QEMU GPU-PCIe AmigaONE
Just popping in


@joerg

Quote:
Of course no sane OS, less than 30 years old, does anything like that.


Anything like what? You said other gfx systems like X11 avoid most VRAM accesses by using a shadow framebuffer in DRAM. I said that most of the time this shadow framebuffer is not used in X11, only for things like the unaccelerated vesa X11 gfx driver, or if the user specifically changes the X11 config to disable acceleration (which then may cause the driver to default to using a shadow framebuffer, or the user may specifically add another option to force it to use one).

Regarding AOS or graphics.library: it's only a design choice whether you allow the RTG system to have more or less freedom in the gfx driver interface. Whether gfx drivers can handle gfx card memory themselves (if they want to) or not. And how you deal with fallback gfx functions for things that the driver does not implement (or accelerate) itself.

In AROS the gfx system (= graphics.library with the hidd stuff) does not insist on having access to gfx card memory (VRAM or whatever). So the fallback functions are implemented with getimage/putimage (~ReadPixelArray, ~WritePixelArray), and those themselves can fall back to getpixel/putpixel. Theoretically the fallback functions could still be written differently/more optimized, like by first trying direct access (LockBitMap) and, if that fails, falling back to the getimage/putimage method.

So in AROS hosted on Linux an "AOS screen bitmap" or a friend bitmap of it can end up being an X11 window or X11 pixmap (it could also be a GL texture, if the driver was written to work on top of OpenGL). And a graphics.library/RectFill() call can end up as xlib/XFillRectangle().

"Can", not "has to". The driver could also have been written completely differently, like elsewhere (UAE): just a chunky buffer in RAM.



Re: QEMU GPU-PCIe AmigaONE
Just popping in


@balaton

Quote:

QEMU can also know the guest addresses but all the "pass-through-magic" is really just calling Linux to pass through the BAR addresses to set up the IOMMU to map the card's resources to the guest's address space so there's not much QEMU does with it.


Yes, but who says that it's not the activation/setup/usage of the IOMMU that introduces the slowdown when these addresses are then accessed.

Quote:

QEMU does not map the card itself so it can't really do a benchmark without breaking the guest ...


Guest doesn't matter. What I meant is something like this.

#include <stdio.h>

int benchmark(void *p, int size)
{
    printf("Benchmark addr %p size %d\n", p, size);
    return 0;
}

int main(void)
{
    for (;;)
    {
    }
}


You put the benchmark function somewhere in the (qemu) sources. Run it with "gdb". Use CTRL-C to break into the debugger and in the debugger do "call benchmark(0x12345678, 10000)".
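The debugger session might then look roughly like this (the binary name, options and the address are placeholders, not real values):

```
$ gdb --args qemu-system-ppc ...usual options...
(gdb) run
... guest is running; press CTRL-C to break into gdb ...
(gdb) call benchmark((void *)0x12345678, 10000)
(gdb) continue
```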



Re: QEMU GPU-PCIe AmigaONE
Just popping in


@balaton

Quote:

I think we would need better understanding of what causes the slowness first before trying to improve QEMU to solve it. It may not even be just slow VRAM access as some results were faster so there's at least one other factor somewhere.


If the emulated AOS4 has access to the passed-through gfx card VRAM, so does qemu. I would try to hack a little VRAM benchmark into qemu itself, if you know how to find out the real address to use for this (it's not going to be the same VRAM address as seen in the emulated AOS4, is it?).

To see what the theoretical max speed is. Maybe the pass-through magic itself slows things down.



Re: QEMU GPU-PCIe AmigaONE
Just popping in


Quote:
joerg wrote:@kas1e
Other gfx systems, for example X11, avoid most VRAM accesses using a shadow frame buffer in DRAM.

I think that's almost never so. That should be the case only if the X11 driver is not accelerated (like vesa), or if a driver option is added to the xorg config file to disable acceleration. Also, the special "modesetting" driver seems to default to using "glamor" for acceleration (implementing X11 functions using GL), so no shadow buffer by default.

Having 3D accelerated gfx (even GUI libs use GL nowadays) with a shadow framebuffer in RAM: how would you do that (fast)?
Quote:

AmigaOS doesn't support anything like that.

Thomas Richter has done some P96 gfx drivers for AOS 3.x which do it - I think sometimes with MMU tricks -, but it would be better if there were not this (P96) gfx system limitation that the gfx system itself insists on being allowed direct access to VRAM. There should be an option for drivers that allows them to handle everything themselves, with the gfx system then interacting with VRAM only through driver calls (like driver->readpixels, driver->writepixels for fallback gfx functions that the driver does not "accelerate").



Re: Catching memory corruption "in the act"
Just popping in


If it's originally from another OS I'd try to debug it there. If it does not seem to show up there, that may be by chance, so try running it through some debugging tools (valgrind?). Or, if it's caused by some endianness bug, try to run it on big endian Linux if possible.

Also, with gdb on these other OSes you can use watchpoints to catch memory writes to specific addresses.



Re: qemu 200% host CPU usage at idle?
Just popping in


@joerg
Quote:
joerg wrote:@balaton
The vcpu can only be stopped by QEmu itself, for example when accessing the emulated TimeBase TBU/TBL registers in the AmigaOS MicroDelay() function.


I don't think there's any stop during such things. I think QEmu just generates code that jumps to what they call "helper" functions.



Re: qemu 200% host CPU usage at idle?
Just popping in


Quote:
Hans wrote:
I just tried the CPUTemp docky on os4depot, and the idle.task is still eating up all free CPU time. Looks like there's no easy way to disable it.


It may be that AOS4 requires an idle task (one that is always TS_READY) if it otherwise doesn't know how to handle the case of no ready task to run. Unlike AOS3, which handles it in exec/Dispatch() (loop until there is one, with a "stop" instruction inside the loop to sleep until an interrupt happens, which can cause some task to become ready).

If not, "Disable(); Remove(FindTask("idle_task_name")); Enable()" should disable it (without really killing it).

Or write your own idle task with priority 1 higher than the system one, and in your idle task have a loop which calls whatever PPC instruction(s) cause the CPU to go to sleep under qemu (maybe the emulated CPU handles such instructions even for CPUs that don't have them in real life).



Re: qemu 200% host CPU usage at idle?
Just popping in


@LiveForIt

If the guest OS does something (like "stop" on 68k, or "hlt" on x86) which lets a real cpu know that it is supposed to sleep (until interrupt happens), then an emulation of that cpu can know, too.

Here MorphOS in qemu (-machine mac99) shows about 8 .. 9 % qemu cpu usage on host with htop.

AROS Sam PPC in qemu (-machine sam460ex) shows about 5 .. 6 % qemu cpu usage on host with htop.

(old version of qemu 6.1.94)

AROS Sam PPC seems to be doing this to go idle:

wrmsr(rdmsr() | MSR_POW | MSR_EE);
__asm__ __volatile__("sync; isync;");
__asm__ __volatile__("wrteei 0");



Re: Qemu + VFIO GPU RadeonRX 550 + AmigaOS4 extremely slow
Just popping in


Is the slowness still there if you pass through the slow gfx card without actually using it in AOS? Maybe you need to move its driver out of its directory (DEVS:Monitors?) to prevent it from being loaded.

If not, what if you then start the gfx driver manually (double click the driver icon?)?

What if you then change the WB screenmode to use that gfx card?

What if you then change the screenmode back to something else (not that gfx card) again?



Re: A1222eth vs. p1022eth driver
Just popping in


The disassembly looks like it may be operations on some Exec List; the second one maybe AddTail(). So it could be missing list protection (Disable, semaphore, mutex, whatever), or other list errors (double add, removing a node which is not in a list, a node freed but still in a list, ...).



Re: Qemu + VFIO GPU RadeonRX 550 + AmigaOS4 extremely slow
Just popping in


@Hans
Quote:
Hans wrote:@nikitas
I'm shocked that the graphics card had any impact on MicroDelay(), because the graphics card is NOT involved in that function.


It's not known if it's MicroDelay() specifically, or if "with that gfx card" any other task which does something heavy (like some other kind of benchmark, or a calculation, or even just a compilation of some code) would see the same slowdown.



Re: Qemu + VFIO GPU RadeonRX 550 + AmigaOS4 extremely slow
Just popping in


If MicroDelay shows more or less the expected results with one gfx card but not with another (with an otherwise identical config), then it's more likely that the problem is not MicroDelay but something else. Like maybe tons of interrupts happening with one gfx card but not the other?

I would try repeating the test with the slow gfx card, but with the test loop changed to be surrounded by Disable()/Enable() (if that makes it fast, try Forbid()/Permit()). If MicroDelay is just a busy loop - which is likely - it should still work in the disabled state. You might have to use a watch and check the time it takes yourself, as the AOS timer.device may behave wrongly (long disabled state, timer register overflows, whatever).



Re: Qemu + VFIO GPU RadeonRX 550 + AmigaOS4 extremely slow
Just popping in


@balaton
Quote:
balaton wrote:@Georg
To help testing, could you please share your Linux kernel options and xorg.config to show how to set up vesafb and the x11perf command again so others can reproduce that test without having to find out the right config?


Could be wrong, but I don't think the X11 "vesa" driver needs any special Linux kernel options. There's another X11 driver, "fbdev", which does use that Linux kernel framebuffer stuff.

In theory, to use the "vesa" driver it's just a matter of editing xorg.conf (in /etc/X11) (or saving a modified version wherever you want), looking for the "Device" section in there and editing it to say:

Driver "vesa"
Option "ShadowFB" "0"

Many years ago that was enough. But nowadays if you try to start X11 (startx -- -xf86config myxorg.conf) it may fail, and the log (/var/log/Xorg.0.log) says "vesa: Ignoring device with a bound kernel driver". That seems to be because of the still loaded kernel modules of the normal gfx card (in my case "nvidia").

So what I do here is first log out of the desktop, use CTRL ALT F1 to switch to a virtual console, run "init 3" to get rid of the X11 (KDE) display manager, then "lsmod | grep nvidia", then "rmmod" the modules (you need to find the right order, i.e. which ones to remove first, otherwise it says "module is in use by ...") and then "startx -- -xf86config myxorg.conf". For some reason the screen here first appears somewhat broken (don't know if it's just the monitor), ~zoomed, ~like_wrong_modulo, so I also have to do some CTRL ALT F1 -> CTRL ALT F7 back and forth switching and then it displays fine.

If the thing is slow and you see a flickering mouse sprite (because of the disabled shadow framebuffer) in front of gfx updates (like the "glxgears" window), it worked.

Google how to disable "compositing" on your desktop; there may be some shortcut key for it. To verify that it's disabled, run "xcalc" or "xclock" from a terminal, press CTRL+Z to freeze the program, then drag its window out of the screen and back in. If this creates gfx trash or makes gfx disappear (like text/numbers), it worked. (This happens because the program is frozen and cannot update/refresh areas of the window which became hidden and then visible again. With an enabled compositor this does not happen, because the window contents are backed up in their own pixmaps=bitmaps and the contents don't get lost when dragged out of view or behind things.)

x11perf -shmput500
x11perf -shmget500

It's unlikely that it is not running in a 4 bytes per pixel screenmode (so that you can interpret x11perf results/sec as million_bytes/sec), but if you want to check, look whether "xdpyinfo" says "32" for "bitmap unit". Though I'm not 100% sure that really reflects the bytes per pixel. (Don't know or remember why, but the AROS hosted X11 driver even creates a dummy test XImage and then picks the bytes per pixel from it.)



Re: Qemu + VFIO GPU RadeonRX 550 + AmigaOS4 extremely slow
Just popping in


@Hans
Quote:
Hans wrote:@nikitas

That certainly would cause some slowdown, although it cannot explain the difference between geennaam & nikitas' results.


You mentioned MicroDelay() and it could be caused by it, as it will be some kind of busy loop checking some PowerPC timer register. If the qemu emulation of that is not very precise (which may depend on the host or even the host (kernel) configuration = there may be a difference between running Linux distribution A vs distribution B), then this will slow things down, as it will cause the delay to last (possibly much) longer than expected.

Could be tested with a little AOS4 program which, for example, calls MicroDelay(10) 100000 times in a loop. It should complete in 1 second. If it takes (much) longer -> problem.



