# AI Accelerators — Part V: Final Thoughts - Adi Fuchs

![rw-book-cover|200x400](https://miro.medium.com/max/898/1*TpPJN15yzJynX_UuD8iE-A.png)

## Metadata

- Author: **Adi Fuchs**
- Full Title: AI Accelerators — Part V: Final Thoughts
- Category: #articles
- Tags: #ai-hardware #hardware
- URL: https://medium.com/@adi.fu7/ai-accelerators-part-v-final-thoughts-94eae9dbfafb

## Highlights

- My bet is that over the next 2–3 years, we will see the continued disaggregation of ideas and solutions, but around 2024–2025, things will dial down, and we will see both research and commercial AI starting to converge to a handful of existing solutions and best practices, with about 3–5 accelerated computing companies leading the pack. ([View Highlight](https://read.readwise.io/read/01hn20sph95jd8ny2qm1yxv66c))
- Be mindful of good architectural foundations based on solid research and long-term vision, and think about as many details of the target application space as possible. ([View Highlight](https://read.readwise.io/read/01hn20ybs8rte2venj8y5vhn50))
- Often, you might find that the chip is busy doing other things like synchronizing between different computation units, fetching data from the off-chip memory, or communicating data across units and chips. To increase utilization, we need to avoid these overheads by building a sophisticated software stack capable of predicting and minimizing the impact of these hardware events for all real-world scenarios, all different neural architectures, and all tensor shapes. That’s why many AI “hardware” organizations have at least as many software engineers as they have hardware engineers. ([View Highlight](https://read.readwise.io/read/01hn4q6hry7jy57rt0xbmsdc8v))
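The last highlight's point about non-compute overheads eroding utilization can be made concrete with a back-of-the-envelope model. The sketch below is not from the article; the function name and all timing numbers are hypothetical, chosen only to illustrate how synchronization, off-chip memory fetches, and cross-chip communication dilute the fraction of time spent on useful compute.

```python
# Illustrative sketch (hypothetical numbers): how non-compute overheads
# reduce accelerator utilization for a single training step.

def utilization(compute_s: float, overhead_s: dict) -> float:
    """Fraction of wall-clock step time spent on useful compute."""
    total = compute_s + sum(overhead_s.values())
    return compute_s / total

# Hypothetical per-step overhead timings, in seconds:
overheads = {
    "unit_synchronization": 0.004,     # syncing between computation units
    "off_chip_memory_fetch": 0.010,    # waiting on DRAM/HBM transfers
    "cross_chip_communication": 0.006, # moving data across units and chips
}

u = utilization(compute_s=0.020, overhead_s=overheads)
print(f"utilization: {u:.0%}")  # → utilization: 50%
```

Even with compute and overhead merely equal, utilization is already down to 50%; this is why the software stack's job of hiding or overlapping these events matters as much as the silicon itself.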