Re: The ADTOOLS thread
|
Posted on: 4/11 18:26
#61
|
Just can't stay away 
|
@rjd324
Quote: In both the STATIC and DYNAMIC versions, I notice that once the MAIN function ends then no printing from STDOUT is respected, despite indirection. In other words, any printing to STDOUT from the destructor is not written to the file via indirection. So, I would like to know why redirection does not work in both of these cases.

After main() ends, _exit() is called, which closes/frees all C library state, including the stdio streams. Even if interfaces and libraries are released/closed as well, IDOS should still be available and working (IIRC I didn't set the interface pointers to NULL), so use something like IDOS->Printf() instead for debugging output. If that doesn't work either, the only remaining option is IExec->DebugPrintF().

Edit: OTOH, C library functions of course have to work in sobjs destructor functions. I can't check it since I no longer have access to the newlib.library (nor any other AmigaOS) sources, but maybe the order of first calling __shlib_call_destructors() and then _exit() is wrong in newlib.
Edited by joerg on 2023/4/11 19:21:55
|
|
|
|
Re: Lsof AmigaDOS?
|
Posted on: 4/10 19:36
#62
|
Just can't stay away 
|
@Capehill
Quote: AROS on Linux seems to come with a task monitor. Not sure how portable it is (didn't check yet).

The main development goal of AROS seems to be being intentionally as incompatible as possible with everything, and that's not limited to AmigaOS 3.x/m68k and AmigaOS 4.x/PPC: it extends even to other AROS versions. The little-endian versions are not only incompatible with big-endian native AROS versions like m68k, PPC or Efika, but AFAIK even native AROS x64 versions can't use any code or data from native AROS x86 versions. Hosted versions like the Linux one even more so: AFAIK the AROS tasks are simply host tasks in such hosted versions (for example Linux tasks), i.e. there is 0% compatibility with any AmigaOS exec task scheduler, and a Linux-hosted AROS task monitor is probably just a Linux task monitor (maybe filtering out non-AROS tasks), so there is nothing at all which could be ported to either AmigaOS or native AROS versions. The big advantage of hosted AROS versions is that they don't have to implement any hardware drivers; they can simply use the ones of the host OS instead.
|
|
|
|
Re: Lsof AmigaDOS?
|
|
Just can't stay away 
|
@Hypex
Quote: But what I meant was a command or tool built in. It's becoming common that I find or rather don't find what I thought would be an obvious tool built into the OS. It's likely using what other OS has and then I realise something so obvious like even a basic process monitor doesn't come with OS4.

Most of what a Unix top does is impossible on AmigaOS:
- Each Unix task has its own virtual address space, and it is always known which, and how much, memory is allocated/used by it. On AmigaOS 99% of the memory is shared between all tasks instead, and even the OS doesn't know which task owns/uses how much and which parts of the memory. (There are some exceptions in AmigaOS 4.x like MEMF_PRIVATE and the extended memory system, but such new parts of the OS are rarely used.)
- Unlike the old AmigaOS 1.x-3.x/m68k Exec task scheduler, the AmigaOS 4.1 ExecNG/PPC one does track the CPU usage of each task, but only on CPUs with the required hardware support, which is used for example by performancemonitor.resource as well. A system-legal way, if you have access to the internal ExecNG includes, is possible - but it can't work on all systems/CPUs supported by AmigaOS 4.x, only the ones with CPU support for performancemonitor.resource.

As a result, all that's possible for tools which can be used on all systems running AmigaOS 4.x are hacks, like my top and Capehill's Tequila. On AmigaOS 3.x similar tools like Scout, SysMon, etc., which could display some system internals, incl. CPU usage per task, did exist as well, and even extreme hacks like Executive, which reimplemented and (nearly) completely replaced the m68k exec task scheduler, were available. But they had a lot of problems, more than tools like top and Tequila on AmigaOS 4.x; especially Executive broke much more software than it could improve.
|
|
|
|
Re: Redirect STDERR and STDOUT to the same file
|
|
Just can't stay away 
|
@rjd324
Redirecting stdin and stdout to the same file is possible and often used ("run <>NIL: foobar"), so it should be possible with stdout and stderr as well, and with all 3. Try something like "foobar *>>out" (though that might redirect only stderr, in appending mode, like "foobar >>out" does for stdout), or "foobar >*>out".
|
|
|
|
Re: Lsof AmigaDOS?
|
|
Just can't stay away 
|
@Hypex
Quote: We don't even have a top command.

Actually we have at least 2 of them. However, my "top" can't be used in DOS or ARexx scripts since it runs infinitely until you stop it with Ctrl-C, but maybe Capehill's "Tequila" can be used that way.
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@geennaam
Quote: Since defragmentation tools like sfsDefrag must be aware of the LBA address map of the filesystem, it could be used as basis for a manual Trim command.

SFSDefrag, PartitionWizard, etc. don't know anything about the SFS LBA mappings. FFS defragmenting tools like ReOrg, my PartitionWizard, DiskMonTools and DiskOptimizer, as well as any other FFS defragmenting tools, do. The difference between FFS and SFS is that in the case of FFS the external defragmenting tool has to do all the defragmenting work itself, while for SFS it's just something like using internal "start defragmenting", "report current progress" and "stop defragmenting" commands; the actual defragmenting work is done internally by SFS itself. FFS defragmenting tools have to stop the file system (IDOS->Inhibit()), whereas SFS defragmenting can be done on a live partition while it's used by other software at the same time. But neither FFS nor SFS is relevant any more; everyone should use NGFS instead.
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@geennaam
SFS doesn't, and never will, support something like TRIM, or any other SSD support.
Main problems:
1. SFS is an about 30-year-old file system which was implemented for HD partitions up to about 100 MB (not GB). I removed the 128 GB partition size limit of SFS\0 in SFS\2, but that probably was a bad idea: SFS doesn't scale well to larger partitions, and even FFS is better in that respect.
2. While the trackdisk API supports HD_SCSICmd for extended features of SCSI and ATAPI hardware, there is nothing like a HD_ATACmd, which would have been required at least 20 years ago already, for example for S.M.A.R.T. support. Instead sg2's PATA/SATA/SCSI drivers and his smartctl tool used undocumented, internal functions of his drivers.
I'd suggest working together with tonyw to improve the ancient trackdisk API, add whatever is required for SATA and NVMe, and use such improvements in his NGFS.
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@Raziel For the SSD firmware, for moving around (unused) sectors, yes. For the SFS file system: No. For NGFS: I don't know, but tonyw can answer that.
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@Ami603
Quote: Would leaving some unpartitioned space be enough?

No, unless all of your partitions on the SSD are nearly 100% full. The firmware can't know whether some of the space on the SSD is not partitioned at all, or whether it's unused space on an existing partition. (Some SSD firmware might include monitoring support for bffs, ext2fs, NTFS, FAT, etc. file systems, but of course not for any partitions using an AmigaOS file system.)
Edit: SFS gets extremely slow on nearly full partitions, no matter whether it's on a HDD or a SSD. Maybe NGFS doesn't have such problems.
|
|
|
|
Re: microAmiga1 and USB 2.0. Is it possible?
|
|
Just can't stay away 
|
Requiring USB 2.0 for a keyboard is strange. Maybe it only needs the increased USB 2.0 power supply (0.5 A/2.5 W; USB 1.x was 0.1 A/0.5 W), for example if it has something like backlit keys? In that case using an externally powered USB hub may work.
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@geennaam
I don't remember putting any transfer size limits in either SFS or diskcache.library, but maybe there is still something left in the original SFS sources I overlooked.
The BUF|BUFFER argument of C:Copy sets the size of the reads and writes it uses, as a multiple of the file system block size: "C:Copy BUFFER 2048 ..." means 1 MB on partitions with 512 bytes/block. Old AmigaOS versions used only 200 as the default, i.e. 100 KB with 512 bytes/block. Since you are getting 16 MB parts it's very likely that the current default of C:Copy BUF|BUFFER is 32768 (* 512 bytes/block = 16 MB). With 16 MB reads/writes it's OK for comparing SATA with NVMe, but with the old 100 KB reads/writes it wouldn't have been.
RAM: isn't just raw memory; it has a file system/handler in between. Even with RAM:, using very small reads/writes is much slower than using large ones.
Edit: The C:Copy BUFFER default is probably 16384 and not 32768; C:Copy should use the least common multiple of the source and destination block sizes, and RAM: has (or at least had 10-15 years ago) a "blocksize" of 1 KB. To make sure C:Copy results from different users are comparable, the C:Copy version should be included. There should be no big difference between different C:Copy versions of AmigaOS 4.x, but the Enhancer Software C:Copy is something completely different; even if only the default for the BUFFER argument differs, it can make a very big difference in the results.
Edited by joerg on 2023/4/4 17:33:55
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@sailor
SFS (original versions from John) included its own "diskspeed" implementation from him. Obviously that "diskspeed" resulted in even more differences between FFS (extremely slow) and SFS (much faster than in normal usage) ... Don't trust any statistics you didn't fake yourself 
SCSISpeed is a benchmarking tool from 1989(!). While the OS4 port of it (Edit:) can still be used for comparing current drivers and hardware, that's only the case if you use the BUF1-BUF4 arguments to use much larger buffers/transfer sizes than the ones which were OK for 35-year-old drivers and SCSI hardware.
@Raziel
Using SCSISpeed or C:Copy is still OK, just not with the very old, tiny default buffer sizes; use much larger BUF1-4 (SCSISpeed) or BUF|BUFFER (C:Copy) arguments suitable for current drivers and hardware instead.
Edited by joerg on 2023/4/4 20:51:24
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@sailor
DiskSpeed results: completely useless, especially when using SFS partitions with diskcache.library. In that case it's just the IExec->CopyMemQuick() speed, minus dos.library, file system, etc. overhead. DiskSpeed is a benchmarking tool for comparing different file systems using the same driver/hardware, not for comparing different drivers/hardware.
SCSISpeed results: useless if you are using its tiny default buffer/transfer sizes. Using something like "scsispeed BUF1=65536 BUF2=262144 BUF3=1048576 BUF4=16777216 ..." instead should generate more usable results for comparing NVMe with SATA.
C:Copy results: I don't know what the current default for its BUF|BUFFER argument is (it used to be 200 in old AmigaOS versions, that's only 100 KB on partitions with 512 bytes/block), but if it's still less than 32768 (16 MB): useless as well...
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@TearsOfMe
Quote: Load Kickstart from ssd and then load the Workbench from the nvme does not work here. The partition shows in the boot menu and had a higher priority but does not boot from it. Only if you select it yourself it boots the Workbench from the NVMe. Tested with FFS and SFS.

Strange. Unless nvme.device includes its own RDB parsing and partition mounting support, like the X1000, X5000 and Sam460 SATA drivers seem to do, you have to add nvme.device to diskboot.config - provided you are either using a very old AmigaOS 4.1 version which still included my diskboot.kmod (Hyperion has no licence to use it in 4.1FE or any newer version of AmigaOS) or the Enhancer Software, which might include a legal version of it as well.
|
|
|
|
Re: The ADTOOLS thread
|
Posted on: 3/31 18:36
#75
|
Just can't stay away 
|
@afxgroup
Quote: As I wrote I can confirm that shared objects are working but since the elf.library has not been released is not possible to test them.

Whatever the problem with old elf.library versions was, as far as I understood it the beta versions you are currently using don't call the __shlib_call_constructors() and __shlib_call_destructors() functions of shared objects any more but use something else internally. That's OK for some beta testing, but of course such extremely broken versions of elf.library must never be used for any public release.
|
|
|
|
Re: NVMe device driver
|
Posted on: 3/31 15:15
#76
|
Just can't stay away 
|
@geennaam
Quote: Such results are to be expected with diskspeed. It is using small transfer sizes of 512 bytes up to 128k.

Buffer sizes are configurable with the BUF1-BUF4 arguments.

Quote: I was not able to get scsispeed running.

Try something like scsispeed DRIVE=nvme.device:0 BUF1=65536 BUF2=262144 BUF3=1048576 BUF4=16777216

Quote: Not sure why it is called scsi speed but I don't implement scsi commands except for acquiring drive geometry.

It's an OS4 port of a very old program (1989-1992), from a time when SATA didn't exist yet and everyone was using SCSI controllers and drives in Amigas (except for onboard A1200/A4000 PATA IDE maybe). Sources are included; it doesn't use HD_SCSICmd but CMD_READ. It seems there is a newer version (4.3) on Aminet than on os4depot.net (4.2): https://aminet.net/package/disk/misc/diskspeed
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@tonyw
Quote: Tried that some years ago. Added to AmigaBoot the ability to read a single file containing all the Kickstart modules. Problem was that reading the microSD card is slow, slower than reading a physical disk. Best improvement I could get was about half a second (in 35 sec), so the idea was abandoned.

Maybe a 7-zip LZMA2 archive with the kickstart files could help, or, if LZMA2 decompression is too slow in U-Boot (IIRC CPU caches were disabled in U-Boot on the AmigaOne SE/XE/µA1 and therefore using strong compression was way too slow; no idea about the Sam4x0 and X5000 versions of U-Boot), maybe LZHAM.
|
|
|
|
Re: Task scheduler
|
|
Just can't stay away 
|
@msteed
IIRC it's the number of IExec->Forbid(), Disable() and SuperState() calls per second. Just like the way too high number of task switches per second on a nearly idle system in the screenshot (usually caused by lots of 0.0x-second timers; you can use for example Scout to check the timer.device list), those are calls with some, or even a lot of, overhead which shouldn't be used that often. It's no big problem on NG systems, but one of the reasons for implementing the "top" program was to find out what made AmigaOS 4.x extremely slow on A1200/BlizzardPPC. For example AmiDock caused hundreds of task switches per second even if no Docky was updating anything, resulting in about 20% CPU usage on an "idle" BlizzardPPC system.
|
|
|
|
Re: NVMe device driver
|
Posted on: 3/27 16:47
#79
|
Just can't stay away 
|
@geennaam
Quote: Q: Yessss!! I will get >Gigabyte/seconds transfer speeds, right? A: Erm, no. NVMe is designed for multithreaded and streaming access. Streaming as in: It takes some overhead to setup a transfer but then we get this train moving. You will most likely use NGFS. This filesystem is single threaded and limits the maximum transfer internally to 32kbyte-128kb. The overhead needed to start those small transfers will kill speed in a single threaded environment.

The 128 KB limit should be fixed in NGFS; even for SATA that's way too small.

Quote: Q: I don't believe you, I will play in media toolbox with blocksize, buffers, Maxtransfer and the Mask. Or? A: Nope, these settings are completely ignored by PPC filesystems.

Buffers and blocksize are still used, at least by FFS and SFS. Using a lot of buffers, for example 5000 or 10000, can make a big difference, for example when reading directories, or the bitmap when the file system is searching for appropriate space for new files, especially if diskcache.library isn't used. All those tiny, single-block reads(*) cause a lot of slowdowns, on PATA/SATA/SCSI HDs as well, but maybe even more on NVMe. Buffers are only used for file system metadata blocks (directories, bitmaps, file extents, etc.); they aren't used for the data blocks of files (except on FFS DOSTypes DOS\0, DOS\2 and DOS\4, i.e. OFS). With FFS, files are much faster when using larger block sizes, but directories are slower (each directory entry uses its own block) and it needs more space on the disk. SFS is slower when using block sizes larger than 512 bytes.
*) Even without using diskcache.library there is still a read-ahead cache inside of SFS, but at least the defaults (configurable with SFSConfig), and IIRC the maximum supported as well, are much smaller caches and cache line sizes.

Quote: Now don't start using SFS/02 just yet because NGFS is a lot faster with small files.

NGFS should always be faster, but especially with a lot of small operations, since it's using the new AmigaOS 4.x file system API while SFS is still using the old AmigaOS 1.x-3.x packet-based API. The AmigaOS 4.x/PPC version of SFS tries to avoid the task switching overhead of the packet API by executing the file system code inside the task of the application using dos.library/file system functions, instead of in the file system task, but even that way the old packet API has much more overhead.
|
|
|
|
Re: NVMe device driver
|
|
Just can't stay away 
|
@geennaam
Quote: Don't bother with diskspeed. This benchmark application insists that my SATA drive is capable of >400MB/s. So it must be measuring some kind of cache memory.

For example, if you use it on a SFS partition with diskcache.library enabled, the results you get are basically just the IExec->CopyMemQuick() speed ... Use something like http://os4depot.net/?function=showfil ... /benchmark/scsispeed4.zip instead.
|
|
|
|