What is the best way to update 100 000 rows in a table (daily price changes)?

Hello.
I have a table with ~100 000 products. Prices for all items change every day.
I see two options for updating:
The classic method is to execute ~100 000 UPDATE queries (one per product row). But how efficient is this?
Delete all rows (TRUNCATE) and reinsert everything in batches using INSERT (for example, 200 products per query, about 500 queries in total — roughly as in the sketch below). This option seems faster, but is it a sound approach?
As an alternative, REPLACE INTO could be used, but it is essentially the same as INSERT with extra overhead: the old document is marked as deleted before the new one is inserted, and the old document is not physically removed from the table.
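
For illustration, a rough sketch of what the TRUNCATE + batched INSERT and the REPLACE INTO variants might look like (the table name `products` and the columns `id`, `title`, `price` are just placeholders):

```sql
-- Option 2: empty the table, then refill it in batches
-- (syntax assumes a Manticore-style RT table; adjust to your engine/version)
TRUNCATE TABLE products;

-- one INSERT per batch of ~200 products, repeated ~500 times
INSERT INTO products (id, title, price) VALUES
    (1, 'Product 1', 9.99),
    (2, 'Product 2', 14.50),
    -- ... up to ~200 rows per statement
    (200, 'Product 200', 7.25);

-- Alternative: REPLACE INTO overwrites rows by id without a TRUNCATE,
-- but the old document is only marked as deleted internally
REPLACE INTO products (id, title, price) VALUES
    (1, 'Product 1', 10.49);
```
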
The essence of the task is to update changing data quickly.
If I delete all the data frequently, how does this affect fragmentation?
I am interested in practical experience: which approach is considered optimal? Are there any pitfalls with the delete-and-mass-insert option (e.g., indexes, relations, locks)?

I did this with UPDATE (although on a much smaller index).
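
Roughly like this, one statement per changed product (table/column names are placeholders). In Manticore/Sphinx an UPDATE like this only touches attribute values (e.g. a float `price`), not the full-text fields, so it should be a relatively cheap in-place write:

```sql
UPDATE products SET price = 10.49 WHERE id = 1;
UPDATE products SET price = 7.25  WHERE id = 2;
-- ... repeated for every changed product, ideally over a single connection
```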

REPLACE/INSERT etc. all carry much more overhead.

… TRUNCATE has the issue that you have a period with an empty index.

In general, fragmentation isn't going to be an issue, particularly if you're just replacing all rows, as each time the whole index is effectively recreated.

If you really just want to recreate the index every time, I would recommend using the older disk indexes. The indexer command is very good at efficiently building (or rebuilding) an entire index, much better than inserting piecemeal. It also has a rotate function, so you can do an ‘atomic’ replacement of the entire index.
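
As a rough sketch (all names, credentials and paths below are placeholders, not a tested config), a plain/disk index fed directly from the source database could look something like:

```
source products_src
{
    type           = mysql
    sql_host       = localhost
    sql_user       = user
    sql_pass       = pass
    sql_db         = shop
    sql_query      = SELECT id, title, price FROM products
    sql_attr_float = price
}

index products
{
    source = products_src
    path   = /var/lib/manticore/products
}
```

Rebuilding is then a single `indexer --rotate products`, which builds a fresh set of index files and signals searchd to swap them in, so searches keep hitting the old index until the new one is ready.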

… inserting in bulk into an RT index works, but it is much slower.