In a similar vein, simply operating a chip as if all workloads have the same needs simplifies design, but it may not be the most efficient use of the transistors that make up a microprocessor. Furthermore, modern high-end microprocessors like Intel’s upcoming Xeon Ice Lake and IBM’s upcoming POWER10 have large numbers of cores and other hardware resources; they’re effectively large systems on a chip.
So, it can make sense to dynamically configure resources based on the needs of individual applications and processes. The system configuration that’s best tuned to run a compute-heavy workload is probably different from one that needs lots of I/O or one that has massive memory needs. Therefore, high-end microprocessors increasingly have the means to parcel out hardware resources and adjust clocks to best match the workloads they’re running.
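One familiar software-visible form of this idea is frequency scaling: on Linux, the cpufreq subsystem exposes a per-core "governor" that controls how clock speeds track demand. The sketch below is illustrative and not from the article; the sysfs path is the real Linux interface, but the workload-classification heuristic and the function names are hypothetical.

```python
# Illustrative sketch: matching a coarse workload profile to a Linux
# cpufreq frequency-scaling governor. The heuristic thresholds here
# are hypothetical; the sysfs path is the standard Linux interface.

from pathlib import Path

def choose_governor(cpu_busy: float, io_wait: float) -> str:
    """Pick a governor from coarse utilization ratios (0.0 to 1.0)."""
    if cpu_busy > 0.8:
        return "performance"   # compute-heavy: hold clocks at maximum
    if io_wait > 0.5:
        return "powersave"     # I/O-bound: cores mostly stall, save power
    return "schedutil"         # mixed: let the scheduler drive frequency

def apply_governor(cpu: int, governor: str) -> None:
    """Write the chosen governor for one core (requires root on Linux)."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor")
    path.write_text(governor)

print(choose_governor(0.9, 0.05))  # compute-heavy workload
print(choose_governor(0.2, 0.7))   # I/O-heavy workload
```

Real systems push the same tradeoff much further down into hardware, but the principle is the same: observe what the workload needs, then reallocate clocks and resources accordingly.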
It’s another case where, if large increases in transistor counts aren’t going to be so easy to come by in the future, hardware designers need to make sure that they’re wringing the most performance they can out of the transistors they do have.
Automating the complex optimizations across hardware and software to take advantage of features like these is an area of active research in academia, industry, and in collaborations between the two.