I just saw one of Intel’s new commercials. It claims 98% of the Cloud runs on Intel. I have no reason to doubt that, but it did bring me back to thinking about Apple.
With each passing year, Apple introduces newer and faster A-Series processors. They've also introduced a new recycling program: when they receive phones or other devices through the program, they take them apart. Why not reuse those old processors?
More What Ifs
I know, all I do is ask questions, but it's fun to ask these types of questions. Why doesn't Apple build servers out of older tech? That's right: take the components pulled from, say, an iPhone 5s and put them to use in a small blade server that accepts daughter cards with a few A7 chips on them.
Think about running a stripped-down version of macOS, or a pumped-up version of iOS, on these servers. We know the two OSes share a common core. Build some experimental hardware that scales by adding more cores via daughter cards (blades?) and see how it performs when used as a web server. Could you still deliver the expected performance? I don't know, but I'd imagine most workloads are I/O bound, network bound, or bound by poorly written software.
I know Apple doesn't really care about server hardware, and why would they? Still, it would be a fun thought experiment to build something like this. Why not? Apple has the money to spend on some fun and potentially useful technology that's also good for the environment.
Business Insider: "Hardware engineers across the industry are using OCP to ask each other questions. 'It's hard to get even two companies to work together. We've managed to get a couple of hundred companies to work together and to let engineers be engineers.'"
You can read more about Open Compute here. It’s fascinating.
Guy English: "There's only one CPU socket and it bets heavily on the bus and GPU performance. While this looks to software to be just another Mac, it isn't. Its capabilities aren't traditional. The CPU is a front end to a couple of very capable massively parallel processors at the end of a relatively fast bus. One of those GPUs isn't even hooked up to do graphics. I think that's a serious tell. If you leverage your massively parallel GPU to run a computation that runs even one second and in that time you can't update your screen, that's a problem. Have one GPU dedicated to rendering and a second available for serious computation and you've got an architecture that'll feel incredible to work with."
At my day job I work on an SDK that lets people embed video in their applications. The SDK sits on an awesome framework, developed by our Systems team, that is portable and lets us create plugins that process media and push it down a pipeline. That pipeline includes plugins to receive data from the network, decode it, time it, and render it to a portion of a display. It can do this for live and recorded video: MPEG-4, H.264, and even low-frame-rate JPEG video (so we don't have to decode frames on the client). But I digress. Notice that I mentioned decoding. We've looked at decoding in hardware, but it's actually quite expensive: you push the encoded frames across the bus, decode them, pull them back across the bus, and finally render them, which pushes them across the bus a third time. Ick.
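The receive → decode → time → render flow above can be sketched as a chain of plugins. This is only a toy model of the idea, not our actual SDK; every class and field name here is made up for illustration.

```python
# Toy sketch of a plugin pipeline: each stage processes a frame,
# then pushes it to the next stage downstream. All names are hypothetical.

class Plugin:
    """Base class for a pipeline stage."""
    def __init__(self):
        self.downstream = None

    def push(self, frame):
        frame = self.process(frame)
        if self.downstream is not None:
            self.downstream.push(frame)

    def process(self, frame):
        return frame


class NetworkSource(Plugin):
    def process(self, frame):
        frame["received"] = True        # stand-in for reading off the network
        return frame


class Decoder(Plugin):
    def process(self, frame):
        frame["decoded"] = True         # stand-in for H.264/MPEG-4/JPEG decode
        return frame


class Timer(Plugin):
    def process(self, frame):
        frame["timestamped"] = True     # stand-in for presentation timing
        return frame


class Renderer(Plugin):
    def __init__(self):
        super().__init__()
        self.rendered = []              # frames that reached the display stage

    def process(self, frame):
        self.rendered.append(frame)
        return frame


def build_pipeline(*stages):
    """Wire stages together in order and return the head of the chain."""
    for upstream, downstream in zip(stages, stages[1:]):
        upstream.downstream = downstream
    return stages[0]
```

Usage: `build_pipeline(NetworkSource(), Decoder(), Timer(), Renderer())` gives you a head stage; calling `push()` on it walks a frame through every stage, which is the shape of the real thing.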
At one time Pelco had built its own combo card that could decode video and render it to the display with a single push across the bus. That was a cool piece of hardware. At the time we could decode and display sixteen separate video streams simultaneously, at varying frame rates, on a card that was otherwise extremely underpowered. I guess what I'm getting at is this: how cool would it be to leverage one GPU on a Mac Pro to decode all video, be it one stream or sixteen, and push the results across to the secondary GPU for rendering, without a transfer back to main memory? The idea seems very exciting.
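A quick back-of-the-envelope model shows why skipping the round trip matters. The byte counts below are illustrative assumptions (a ~200 KB compressed frame, raw 1080p BGRA output), not measurements from any real hardware.

```python
# Rough model of bus traffic per frame under the two schemes described above.
# All sizes are assumptions for illustration, not measured numbers.

ENCODED_FRAME = 200 * 1024       # assumed compressed H.264 frame: 200 KB
DECODED_FRAME = 1920 * 1080 * 4  # raw 1080p BGRA frame: ~8 MB

def bytes_on_bus_roundtrip():
    """Upload encoded frame, read decoded pixels back to main memory,
    then upload them again to render: three crossings."""
    return ENCODED_FRAME + DECODED_FRAME + DECODED_FRAME

def bytes_on_bus_gpu_to_gpu():
    """Upload the encoded frame once; decoded pixels go straight from the
    decode GPU to the render GPU and never revisit main memory."""
    return ENCODED_FRAME

print(bytes_on_bus_roundtrip() // bytes_on_bus_gpu_to_gpu())  # prints 82
```

Under these (made-up) sizes, the round trip moves about 82× more data across the bus per frame, and the gap only grows with resolution.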
Now all we need to do is port our pipeline to OS X (totally doable) and create a new decode/render plugin that takes advantage of the second GPU. I'm not sure it's possible without multiple bus transfers, but it would be fun to try.
Mashable: "Video pros will probably be excited to see the new Mac Pros, the strongest of which now sports a drool-inducing 12 cores (that's 24 virtual cores if you count Intel's hyper-threading technology). If you're so inclined, you can bump up your Mac Pro with a 512 GB SSD, as well as an ATI Radeon HD 5870 with 1GB of memory. If you have to ask, the price starts at $4,999."
Emphasis is mine. I'm pretty sure most folks will be talking about the Magic Trackpad today, so I thought I'd go right for the power-hungry gearhead in you. I think I could find a use for 12 cores, like compiling code. Heck, that's so fast the code would compile before you clicked the button.