Recently, a benchmark of concurrency implementations in different languages made the rounds. In it, Piotr Kołaczkowski used ChatGPT to generate the examples in the various languages and then benchmarked them. That was a poor choice, because I found the article and read the Elixir example:
```elixir
tasks =
    for _ <- 1..num_tasks do
        Task.async(fn ->
            :timer.sleep(10000)
        end)
    end

Task.await_many(tasks, :infinity)
```

And, well, it's a pretty poor example of BEAM's process memory usage, and I am not talking about the fact that it uses 4 spaces for indentation.
For 1 million processes this code reported 3.94 GiB of memory used by the process in Piotr's benchmark, but with a little work I managed to cut that roughly 4-fold, to around 0.93 GiB of RAM (a rough sketch of the general idea follows the list below). In this article I will describe:
- how I did that
- why the original code was consuming so much memory
- why in the real world you probably should not optimise like I did here
- why using ChatGPT to write benchmarking code sucks (TL;DR because that will nerd snipe people like me)
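To give a flavour of the kind of change involved, here is a minimal sketch of one way to shrink per-process memory on the BEAM: spawn bare processes instead of `Task.async/1` and hibernate each one while it waits. This is not necessarily the exact optimisation from the full article; the `Sleeper` module, the `num_tasks` value, and the message names are illustrative only.

```elixir
defmodule Sleeper do
  # Entry point the process resumes into after hibernation; by then the
  # wake-up message is already sitting in its mailbox.
  def awake(parent) do
    receive do
      :wake -> send(parent, {:done, self()})
    end
  end
end

num_tasks = 1_000_000
parent = self()

pids =
  for _ <- 1..num_tasks do
    spawn(fn ->
      # Schedule the wake-up, then hibernate: the call stack is discarded
      # and the heap is compacted, so the idle process keeps only a small
      # amount of memory instead of a full Task plus a sleeping call frame.
      Process.send_after(self(), :wake, 10_000)
      :erlang.hibernate(Sleeper, :awake, [parent])
    end)
  end

# Wait for every sleeper to report back.
for _ <- pids do
  receive do
    {:done, _pid} -> :ok
  end
end
```

The point of `:erlang.hibernate/3` is that it throws away the call stack and forces a compacting garbage collection, so each waiting process holds on to very little memory until its wake-up message arrives.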
continue reading on hauleth.dev
If this post was enjoyable or useful for you, please share it! If you have comments, questions, or feedback, you can send them to my personal email. To get new posts, subscribe via the RSS feed.