I don't remember exactly, but wasn't there some issue with MAC addresses on the X1000 where they were all the same? I definitely remember something of that sort, I just don't know if it was the X1000.
No idea about the X1000, but the X5000 has, or at least had, the same issue: in the default U-Boot env variables on the microSD card delivered with it, the variables for the MAC addresses were set. All systems using this default therefore use those same two MAC addresses, probably the ones of an X5000 of a U-Boot developer. If the MAC address env variables are set (and saved), the values from the env variables are used. Only if they are not set, or are deleted, does the firmware read the MAC address from the hardware, which is (or at least should be) different for each system, and store it in the U-Boot MAC env variables again.
But it's only a problem if you have two or more X1000s, or two or more X5000s, in the same LAN.
Using the AOS4 version of the maybe-undocumented PA_CALL/PA_FASTCALL instead of PA_SOFTINT for the timer reply port would work better.
I don't know what AROS's PA_CALL/PA_FASTCALL are, or what the difference between the two is, but in AmigaOS 4.x it's the same as in AmigaOS 0.x-3.9, except that on AmigaOS 4.x it has to be a PPC-native function and not emulated m68k code: a single, undocumented but implemented method.
I'm not sure anymore, but I think the value required for it is (the same as) PF_ACTION.
Are gfx drivers in AOS4 already loaded/running when S:startup-sequence is executed?
Of course they are, required for example to display the "Early Startup Menu" on the gfx card. But it's limited to a special "BootVGA" screen mode, IIRC 800x600. It's the same when booting without executing S:Startup-Sequence, from the early startup menu, or with some special keys.
Only after the OS is completely loaded, after starting the DEVS:Monitors drivers, etc., do you have access to FHD, 4K, 8K, etc. screen modes (depending on gfx card and monitor hardware).
There is also a "DRIVER_OVERVIEW.txt" explaining how it works in brief.
Also there is a "linux_ref" dir: it contains the Linux source code of the drivers for everything supported on the X1000 (including our network driver), and the necessary parts from the platform support, so you have no need to search for anything anywhere -- everything is there.
Please check this, maybe some of you will immediately find what is wrong, or will have some ideas, etc.
Thanks a lot!
PS. And thanks to Derffs for the device skeleton code, that helped a lot!
EDIT: Please check the latest version again (the one on GitHub) with my stress tool, which does the following: it opens multiple TCP connections and receives data as fast as possible (on the server side I send a bunch of AAAAAA when the stress tool connects from the X1000), so we force the driver's RX ring to wrap many times per second. The first run after boot passes the 300-second run fine; then I immediately run it a second time, and:
[stress] =====================================================
[stress] PA6T-1682M RX ring-wrap stress tester
[stress] =====================================================
[stress] Server : 192.168.0.144:9999
[stress] Connections: 8
[stress] Duration : 300 seconds
[stress] Ring wrap : every 64 frames
[stress] Connection 0 open (fd=0)
[stress] Connection 1 open (fd=1)
[stress] Connection 2 open (fd=2)
[stress] Connection 3 open (fd=3)
[stress] Connection 4 open (fd=4)
[stress] Connection 5 open (fd=5)
[stress] Connection 6 open (fd=6)
[stress] Connection 7 open (fd=7)
[stress] 8/8 connections open. Starting receive...
Sometimes I don't even need to transfer a big amount: I can just run it + Ctrl+C, repeat that a few times, and if I'm lucky (or unlucky) I get a lockup right after 5-6 Ctrl+Cs. So it's not the amount of transferred data or the time, it just happens at any moment.
@All Some bug-hunt progress: until today I did all tests over a 100 Mbit/s cable, so those lockups were not very easy or fast for me to reproduce. I usually had to run the stress tool for some time (200-300 seconds, sometimes more, sometimes less); sometimes just 10-20 runs/Ctrl+Cs were enough, but most of the time I had to wait to reproduce it. Speed was ~13 MB/s, fully stressed to the maximum.
Now, when I switch to cables which handle 1 Gbit fine, I get lockups IMMEDIATELY, every time I run the stress tests. Switching back to the 100 Mbit/s cable, the lockup doesn't happen as fast and takes longer. Back to 1 Gbit again: immediate.
Does that give us any clue? Only that something overflows?
EDIT: Also, checking the Linux pasemi-related platform code, I found arch/powerpc/platforms/pasemi/setup.c, where they configure a bunch of SoC-internal debug registers as part of their MCE handler setup. As we don't have the datasheet, it's unknown how many debug registers are present in the SoC, but what we know for sure is that in Linux (this setup.c) they uncover 8 of them:
So I added a diagnostic bit to the driver which, on every IRQ entry, reads all 8 of these SoC debug registers plus the DMA operational status registers and writes them into RAM that survives a reboot after a lockup, so in CFE I can do "d 0xXXXXXX" to read what the last values were before the death. The result: all 8 debug registers are completely zero before death. WTF!
Edited by kas1e on 2026/3/19 19:22:46
kas1e wrote:@All Now, when I switch to cables which handle 1 Gbit fine, I get lockups IMMEDIATELY, every time I run the stress tests. Switching back to the 100 Mbit/s cable, the lockup doesn't happen as fast and takes longer. Back to 1 Gbit again: immediate.
Does that give us any clue? Only that something overflows?
Are you just changing the cable, or the physical ports? I.e. when you say you're switching the cable, do you mean you're switching between Cat 5e, Cat 6, etc., or changing from a 100 Mbit link partner to a Gigabit link partner?
BTW, have you tried doing a ping flood from another machine to the X1000?