The Audit

The Next Era of Computing and Processor Technology

August 21, 2023 IT Audit Labs Season 1 Episode 24

The Audit - Episode 24

In Part 1 of the Tech Lessons Series, prepare to be transported into the future of computing resources with our fascinating guest, Bill Harris from IT Audit Labs. We're opening up the world of processor design and specialized workloads, discussing the intricacies of chip fabrication, the genius behind improving processor speeds, and the art of creating modern processors. Get ready to discover a realm of substrates, lithographies, and elements that form the backbone of future processors.

Ever wondered about the application of Moore's Law in real life, or what's really behind processor clock speeds? This episode answers all that and more, bringing exciting insights into the clever tactics used to amplify modern computation. Dive into the mechanics of how processors are physically assembled, and learn about advanced technologies such as 3D NAND, chiplets, and SSL acceleration that are revolutionizing the field.

As we look forward to the future of computing and the exciting investment opportunities it presents, we delve into the potential of semiconductors, the massive CERN particle collider and the intricate challenges of breaking into the semiconductor industry. Don't miss out on our spirited conversation on the potential of DNA and crystalline molecular storage, and the role of quantum computing in enhancing processor speeds. And remember, amidst all this tech talk, the importance of security, risk and compliance controls to safeguard our clients’ data remains paramount. So, buckle up and come along on this exhilarating journey into the future of computing!

Speaker 1:

You're listening to The Audit, presented by IT Audit Labs. Thanks for joining us, and welcome to today's episode of The Audit, where we will be discussing classical computing and what will be shaking up the industry beyond 2030. Join us on a journey with IT Audit Labs' own Bill Harris to learn more about the future of semiconductors, advances in lithographies, and what comes after silicon. You're not going to want to miss a moment of this information-packed episode, so make sure to listen until the end. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials and subscribing on Apple, Spotify, or wherever you source your podcasts. More information can be found on ITAuditLabs.com.

Speaker 1:

Well, we are here today to talk about the future of computing resources, or, I guess, the future of compute. We've got Bill Harris joining us, so thanks, Bill, for coming on and putting together this presentation. I've had the chance to hear it one time before, and it's really awesome. Thank you for doing that. This is part of a three-part series, where you'll be talking about quantum and then the future of storage as well in other podcasts. We've also got Alan Green with us, so welcome, Alan. Alan, you are from where?

Speaker 3:

I'm from a company called Fair Isaac Corporation, more lovingly referred to these days as FICO. We are known for your FICO credit score in the market, right, but we do more than that. We also sell software, and my role is Senior Director of Infrastructure for the software division, and so I own that responsibility globally.

Speaker 1:

Awesome. So as an employee there, do you have the ability to adjust your score a little bit? How does that work?

Speaker 3:

I wish I could, but they keep me far away from the data.

Speaker 1:

Gotcha. And then we've got Nick Malamon, a regular here hosting the podcast with us. So thanks, Nick.

Speaker 2:

Yes, sir, happy to be here.

Speaker 1:

We're going to learn some stuff today, and Nick, Bill, Alan and I go back a little ways, from a previous organization that we all used to collectively work at, named Thomson Reuters. And Alan and Bill, did you guys spend much time working together? Were you in different divisions?

Speaker 3:

Well, we were in different divisions. Bill had the sweet spot in the architecture space and I was more so on the operation side, but we absolutely interacted.

Speaker 4:

A little bit afterwards too, yeah.

Speaker 3:

Yeah, admittedly yes.

Speaker 1:

And you guys share a common hobby. And, Alan, I saw you drinking from a red Solo cup. What was in that, Alan?

Speaker 3:

Ice tea, my friend, Slow lice tea to keep the pipes well lubricated for the conversation today.

Speaker 1:

Otherwise known as bourbon. And Bill, you're a bourbon aficionado as well, are you not?

Speaker 4:

I am, and two nights ago I had one of the worst Manhattans that I've had, in Manhattan. So if you go to a show on Broadway and you pony up to the bar, don't trust what they're going to put in that drink. They're not well trained, they really aren't.

Speaker 1:

Was it something off the rail? What was it?

Speaker 4:

No, they put in too many bitters. So, you know, you put in like a dash or two of bitters; I watched this guy dumping in like eight to ten dashes, so it didn't come out right. And, Nick, are you a bourbon guy also?

Speaker 2:

I am, and I would say I'm pretty strict about only drinking Old Fashioneds. So I understand what Bill is saying with the bitters, and to be careful with that.

Speaker 1:

Awesome. Well, we'll leave Alan to his tea, and let's jump into this presentation. Bill, do you want to give us a little backstory on this, how you got started, how you put this together, and then take us through it? It's pretty exciting.

Speaker 4:

Sure. So my field, one of the things that I focus on, is futures, technology futures. I've got a pedigree in storage and compute, so I found it pretty easy to latch onto these things and focus on what's coming up in the future. I don't really care too much about the software aspect of it; really, I'm more concerned about the hardware and the physics behind it. Consequently, I've had opportunities to connect with some of the advanced labs people from IBM, from EMC, from HP, some of the biggest companies, and I've been able to visit their advanced labs, talk with the scientists behind this stuff, and pick their brains a little bit, from a non-product perspective, about what's coming up and how it's going to shake the industry. For me, that's really interesting, and that's why I'm here today: to share it with everyone else.

So for today's agenda, I'm going to talk about the future of compute, from current day up until around the mid-2030s. Our agenda will be in five parts. First, I'm going to introduce everyone to compute foundries: where these chips are fabricated, what the challenges are today, and what we're doing about those challenges. It's important to understand where the processors are made, and from that we'll get a better understanding as to what's coming up. We'll talk a bit also about what makes a processor better; getting beyond those foundries, what are the scientists doing today to improve processor speeds, improve processor yields, and keep things competitive in the decade ahead? That will necessarily lead us into a conversation around processor design and some of the constraints that we have to deal with, as well as how we're overcoming some of those constraints, and we'll talk about some of the innovative things that are getting introduced to processor design today and in the near future. We'll also talk about the physics and the chemistry that go into the processors.
So it's not just a matter of how these things are assembled, but, right down at the atomic level, what are the physics of today's modern processors? We'll talk about substrates, lithographies, the elements that are employed to build processors now and in the future, and alternative elements that they're looking at. Then we'll wrap it all up and talk about what this really means for the industry and what we might expect to see in terms of some of those benefits and some of the challenges that lie ahead.

Speaker 4:

So, first off, we'll discuss where the processors are made, and it's important to start here as we look at the five big foundries. Now, 90% of the world's processors are made in these five foundries. First up is TSMC, the Taiwan Semiconductor Manufacturing Company. As the name implies, it's based in Taiwan, and they produce 54% of the world's microprocessors today, for AMD, Apple, NVIDIA, and others.

Speaker 4:

Number two is Samsung, at about 17%. This is a South Korean company, and they develop processors for NVIDIA and, most notably, for IBM. UMC is number three, also a Taiwanese company, producing processors for Texas Instruments and Realtek. UMC is at about 7%, and they're tied with GlobalFoundries, which was founded in 2008 or 2009 by AMD and then subsequently spun off. AMD wanted the capital, and they also wanted to focus on being a fabless company, so AMD used that capital to really great effect. But now GlobalFoundries exists independently, and they're the only major foundry based here in the United States. They still produce processors for AMD and for Qualcomm. And then, rounding out the top five, is SMIC. This is the first Chinese entry, and they produce processors for Qualcomm, Texas Instruments, and Broadcom, among others.

Speaker 4:

So you might see the problem here. As you look at these five foundries, the issue is that four of them, representing approximately 83% or so of production, are based in Asia. That creates a problem, because any type of political or financial issue in Asia will affect worldwide supply, and that has national security implications, implications for the world stock markets, and so on. It's just a lot of risk to have in one place, especially when you consider the strife that's happening between China and Taiwan right now. So, in an effort to ameliorate some of that, the Biden administration introduced the CHIPS Act in 2022, which encourages companies to build fabrication plants here in the United States and will subsidize their efforts to do that. Consequently, a number of companies have stepped up on that: TSMC is building a fab in Arizona, Intel in Ohio, and Micron is looking at New York. So building some of those processor factories here in the US will, I think, help stabilize some of the risk that would otherwise be present.

Speaker 1:

Bill, in the top five there, I don't see Intel. I know you say they're building one out in Ohio, but where are those manufactured today?

Speaker 4:

So, if I'm not mistaken, I think Intel has one in New York, but Intel's not in the top five, right? So they are working with other foundries to produce.

Speaker 1:

And when you say chips, it's any of the processors that we might see on a motherboard, or in a phone, or in a car, or in just the hundreds of thousands of places that these chips could be. And, as I understand it, you could have a device that has chips from multiple different manufacturers. That's kind of the IoT problem, right? So you could have chips from all of these different foundries in one device.

Speaker 4:

Yeah, absolutely. This term chips here really covers the whole gamut, because when we think about processors today, we might think about the stuff that's in our computer, and that's actually, in today's world, a minority of the processors that are out there. You correctly named all the ones in your phone, and the one in your refrigerator, and the one you have in your doorbell, and so on. It's just a ton of stuff out there, and so this presentation will really cover all of them, but it'll be focusing more on the higher-end processors, the types that you see in enterprise spaces for the most part.

Speaker 3:

So, Bill, I have been led to believe that the reason most of these chip manufacturers were in that region was due to two factors: the product required to build the chips was abundant there, and labor was inexpensive. Can you comment on whether or not that's factual?

Speaker 4:

Yeah, I think it is factual. I think the labor certainly is inexpensive; that'll probably be your number one reason. Number two, to an extent, is the product being readily available there. Yes, the product is available there. Also, when you talk about fabrication labs, it's a very sticky thing: once you have the infrastructure in those places, that's going to be where you tend to continue to put it. With that said, though, and I'm going to get to this in just a moment, we're going to talk about silicon and just how abundant silicon is. You're going to find silicon, which is really the primary ingredient in these processors, enormously abundant across the world. So, yeah, I would say that what you're saying is generally true, but that doesn't rule out putting it anywhere else.

Speaker 3:

Okay, thank you.

Speaker 4:

So I want to talk a bit about how scientists today are improving CPUs over time. Now, when I talk about a CPU, literally speaking the central processing unit, the main processor that gets things done, I'm including any sub-processors, your ARM chips, your FPGAs, etc. It really includes all of this. First and foremost, they're trying to make them smaller, and this is done for a number of reasons. First of all, smaller CPUs just cost less in terms of materials. They also generally draw less power; you usually need to feed a smaller CPU a little bit less voltage than a larger one, because you're pushing the electrons a shorter distance. They tend to produce a little bit less heat, and they're certainly going to give you a lot less latency, again because of those speed-of-light issues as you're firing electrons down the path. It's important to state here that when we're talking about smaller lithographies, you may see some of the manufacturers say, well, we've got a 7nm process or a 5nm process. These are not directly comparable among manufacturers, so they have really become marketing terms. For example, Intel's 10nm process fits about 100 million transistors into one square millimeter, and TSMC's 7nm process does about the same thing. So there's just a difference in how they're building those lithographies, which results in this nuance in the way that they name them.

Speaker 4:

Now, looking beyond just the reduction in lithography and making things smaller, I also want to point out that Moore's Law, the famous observation that the number of transistors on a microchip doubles roughly every two years, still lives. Its death has been touted for a while now, for several years I think, but it keeps pushing on past the next level, and I think it's got another couple of years ahead of it yet before we have to reevaluate it, for reasons that will become apparent. In addition to shrinking the processor to get the benefits I just talked about, one of which is less latency, I also want to point out that, by and large, processor clock speeds have plateaued. This happened 15 to 18 years ago at this point: speeds crept up toward the 5 GHz mark and have roughly stabilized around that area. That is because, as things have become a lot smaller, pushing much past that produces a lot of heat and a lot of other problems that we'll talk about later with the way that electrons flow through the silicon; it becomes problematic as it runs a whole lot faster. Interestingly, the speed crown today belongs to an Intel Core i9-13900K.

Speaker 4:

On liquid nitrogen, they've gotten that to run at about 9 GHz. That's not particularly practical in the real world. Most residential applications are not going to be cooling with liquid nitrogen, and while you'll find it in some enterprise applications, there are risks associated with liquid nitrogen, including leaks. It's expensive and it's kind of dangerous, so it's not really a practical solution, just an interesting one. You generally won't see things go much higher than, say, 6 GHz or so via conventional cooling methods.
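The doubling Bill mentions compounds quickly, which a small sketch makes concrete. The starting transistor count and years below are arbitrary illustrative assumptions, not any specific product's figures:

```python
# Illustrative sketch of Moore's Law: transistor counts doubling
# roughly every two years. Starting figure is an assumption chosen
# for illustration only.

def moores_law(start_transistors: int, start_year: int, end_year: int) -> dict:
    """Project transistor counts, doubling every two years."""
    counts = {}
    n = start_transistors
    for year in range(start_year, end_year + 1, 2):
        counts[year] = n
        n *= 2
    return counts

projection = moores_law(10_000_000_000, 2023, 2033)  # assume ~10B to start
for year, count in projection.items():
    print(year, f"{count:,}")
```

Under that assumed starting point, the projection grows 32-fold over the decade, which is why even modest per-generation shrinks matter so much.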

Speaker 4:

And finally, I want to call out that, in addition to the densities and the clock speeds and the latency, one of the other things they've done to push processor technology forward is to play clever tricks. One of those is improving the number of instructions per clock; most modern processors today can do multiple instructions per clock, so it's not just one. Instruction sets also matter a lot, and you'll see these in today's processors in the form of SSL acceleration, or AES-256 acceleration, in which you have a specific instruction set on that processor to calculate the cryptographic math necessary to encrypt and decrypt something like AES. Similarly, you'll see instruction sets on modern processors that are really focused on virtualization, so they can hook into some of the Hyper-V calls or some of the VMware calls and accelerate those functions.

Speaker 1:

Where would be a real world example of that AES type of encryption or decryption?

Speaker 4:

Well, it's in pretty much all the modern processors today. AES is pervasive, to say the least, so nearly every modern proc today can handle it. If you have an enterprise array, or even a hard drive at home that you want to encrypt with, say, BitLocker or some other method, you'll find that the speed at which the processor can handle that encryption is a whole lot faster than for some other types of encryption that it may not have an instruction set for. So there's a lot of use cases for it.
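As a concrete aside: on Linux, a processor that supports the AES instruction set advertises an `aes` entry in the flags line of `/proc/cpuinfo`. The sketch below just parses such a flags line; the sample string is fabricated for illustration, not taken from a real machine:

```python
# Minimal sketch: check whether a CPU "flags" line (the format used by
# /proc/cpuinfo on Linux) advertises a given feature such as AES.
# The sample line below is made up for illustration.

def has_flag(cpuinfo_flags_line: str, flag: str) -> bool:
    """Return True if `flag` appears as a whole word in the flags line."""
    return flag in cpuinfo_flags_line.split()

sample = "flags : fpu vme aes sse2 avx2 vmx"
print(has_flag(sample, "aes"))   # AES acceleration advertised
print(has_flag(sample, "sha_ni"))  # SHA extensions not in this sample
```

On a real system you would read the flags line from `/proc/cpuinfo` instead of using a hardcoded sample.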

Speaker 4:

I think, as people become increasingly aware of security and privacy concerns around their data, they're beginning to encrypt it more and more.

Speaker 4:

And then assembly is also going to matter a great deal, and I'll talk about that on this very next slide, where I'll get into how they build these processors in innovative ways that really start to kind of stretch the limits of what we've seen.

Speaker 4:

So, in terms of assembly, it used to be that scientists would focus on the processor from end to end: there would be an X and a Y axis, and that was difficult enough. When we're talking about circuits at the microscopic level, and there are billions of them, that's a difficult feat. But they've gotten so good at this, and their fabrication techniques have become so precise, that they are now able to build up the chip more and more. We're seeing this in things like 3D NAND. A lot has been talked about with 3D NAND, and the reason it has become so dense is that they're now taking it into that third dimension: they're building up the chip and stacking those NAND layers on top of one another. So now, instead of getting maybe a 16 gigabyte NAND chip, you're able to get up into terabytes of size.

Speaker 1:

What's NAND?

Speaker 4:

Oh, this is flash. This is the type of storage that you use in your flash drives. Okay, interestingly, most NAND is also produced in Asia, so that'll be another conversation. But the way this applies now to processor design, and I'll give you a good use case on this one: AMD has introduced, in its previous generation and now into its current generation, what they call the X3D chip, in which they're putting cache, a significant amount of cache, right on top of the main processor. This has enormous, enormous benefits in workloads that can really use cache: workloads that tend to recall the same information over and over and over again, so the processor doesn't have to go all the way back out to main system memory; it just grabs it from the cache that is now bountiful, sitting on top of the proc. So it works better with gaming than it does with, say, a completely random relational database. But gamers have really latched onto this, because they can get a whole lot more frames per second in their games, and that's now driven, I think, more investment in the gaming industry with the likes of NVIDIA and ATI. So that's really been a boon to the performance of these chips.
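The benefit Bill describes, repeated requests answered from a nearby cache instead of main memory, can be loosely illustrated in software with memoization. This is only an analogy to show why repetitive workloads benefit from more cache; it is not a model of AMD's hardware, and the addresses and values are made up:

```python
from functools import lru_cache

# Software analogy for on-die cache: repeated requests for the same
# data are answered from a cache instead of taking the slow path.
slow_fetches = 0  # counts trips down the slow path ("main memory")

@lru_cache(maxsize=None)
def fetch(address: int) -> int:
    global slow_fetches
    slow_fetches += 1      # the slow path runs only on a cache miss
    return address * 2     # stand-in for whatever value lives there

# A workload that reuses the same addresses, like a game engine
# repeatedly touching the same assets:
for addr in [1, 2, 1, 1, 2, 3]:
    fetch(addr)

print(slow_fetches)  # only the 3 distinct addresses hit the slow path
```

Six requests, but only three slow fetches; a workload with no reuse (like a fully random database scan) would get no such benefit, which matches Bill's point about where X3D helps.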

Speaker 4:

Another trick that engineers are putting into their processors is chiplets. A chiplet is a very specific chip, or wafer, that is designed to perform one function, and you can generally produce these at greater scale because they do one thing and they do it really well. Think of it as just a very specialized chip, and you can put chiplets together to form bigger processors. You'll find chiplets now in a lot of the world's leading processors, and they come together to do something bigger. Chiplets also give you a better yield, because they are simple; the manufacturers end up throwing fewer and fewer of them away, because they tend to come out right the first time. And then, finally, there's a little bit of a tweak on some of the specific instruction sets I talked about.

Speaker 4:

But building processors for accelerated workloads is another one, and one of those workloads getting a lot of press recently is artificial intelligence. As an example of this, IBM introduced their Telum chip on the z16 mainframe. The Telum chip is an expert at AI inferencing, and it does that workload exceptionally well. In most use cases it will rival, or beat, GPU computing for artificial intelligence. It's purpose-built to do that, so it's really, really fast. We're actually seeing a lot of these types of innovations on big mainframes and supercomputers before they trickle down into the smaller microcomputer segments. It would be a mistake to think that mainframes are old or somehow dinosaurs, because that's not the case at all. We're seeing a lot of performance there and a lot of really cool things happening in that space, so it's a very innovative platform.

Speaker 1:

Bill, I know we're going a little bit down the rabbit hole here on the chips and the design, but maybe we could back up for a second and talk about these specialized workloads. You had mentioned GPUs, which would be the graphics processors that we see on gaming cards, and one of the companies in the news recently is NVIDIA, for making those graphics cards. It sounds like there's an AI play using those cards, which is why the stock has taken off a little bit recently. But sometimes, when we think of a chip, it kind of looks the same. How is a chip designed to handle one type of workload versus another?

Speaker 4:

So all of the chips work with a system underneath them. It's not just a series of meaningless circuits banded together. The processor works with the motherboard and with the chipset underneath it to employ a language, and the processor and the chipset come together to build that language, which can be customized to specific workloads. It's the machine language in there that has instruction sets built for very specific workloads. Some of those instructions can then short-circuit, in a good way, some of the components in the chip. So, for example, instead of one of the routines having to go to the chip and ask for a memory fetch, and the chip having to go to its registers and figure out where that data is, one of the instructions can say: for this particular workload, you don't have to go through the whole process; you can go directly to memory, because I know where that data resides.

Speaker 2:

You could use it in something like IoT; Alexa could have something cached, or Siri.

Speaker 4:

Absolutely, yep, absolutely. It has great uses for things like IoT. What I just described there was direct memory access, and it's those types of workloads that allow some routines to bypass a complicated mess of circuitry to accelerate what needs to be done.

Speaker 1:

It used to be, going back a couple of years when Bitcoin mining, or coin mining, started, that you'd have your spare PC at home and you could dedicate it to mining bitcoins, for example, which essentially is solving a complicated math problem and trying to do it faster than the other computers connected to the internet. Then, maybe eight or nine years ago, I think it pivoted to GPU-based mining, because that was faster than the typical CPU of a computer for mining those coins. And now I think there are even more specialized miners that you can buy, that people put into mining farms; I think one of them, the DragonMint, is the name of the unit. But it seems that's the specialized workload you're talking about, and that computer wouldn't be good for word processing or even gaming, because it's so focused, or specialized, on just coin mining.

Speaker 4:

That's right. This is a good example. Other examples that I'll get into in a future conversation will be quantum. Quantum is good at extremely specific things and nothing more. Quantum is a lousy solution for general-purpose compute, but give it a particular algorithm and it can churn through that one algorithm more quickly than anything else in the world can. So, yeah, those are all good examples.

Speaker 4:

So now we get to the semiconductor. What really sits at the foundation of our processors today is silicon. Silicon is the second most abundant element in the Earth's crust; the first is oxygen. Being so abundant, it is very cheap to gather. It's one of the non-metal elements, and at room temperature it can behave as either a conductor or an insulator. That's what makes it a semiconductor: you apply a voltage and you can make it a conductor or an insulator. Now, we're going to continue to see silicon through this decade easily and into 2030, but I'll talk a bit about what happens at the end of this decade. The sheer mass of silicon out there, and how much we depend on it today, means it's going to be around for a while, and we've really perfected it.

Speaker 4:

However, some problems appear on the horizon. Silicon is approaching the minimum size at which we can work with it within a processor. A silicon atom itself is 0.2 nanometers, or two angstroms. A few years ago, 7 nanometers was approaching what a lot of folks thought would be the limit, but then came extreme ultraviolet (EUV) lithography, and they were able to build silicon features at even smaller lithographies and make chips even more dense. That will continue, and we're going to see this shrunk down farther and farther.

Speaker 4:

Silicon will probably be deployed at 1 nanometer in the next two to three years, and at that size, remember that the silicon atom itself is 0.2 nanometers, so a 1 nanometer deployment is really getting down to about the smallest it could possibly get. Now, the problem is that at that size silicon becomes susceptible to quantum tunneling, a phenomenon in which a particle with insufficient energy can still pass through material that it shouldn't be able to pass through by the laws of classical physics. Of course, that's a major issue for something that is supposed to be highly predictable and has to be 100% correct, and so that's a problem they will have to overcome, I think, before they can shrink it down much further than 1 nanometer.
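For the curious, the textbook approximation for tunneling through a rectangular barrier (standard quantum mechanics, not something stated in the episode) shows why shrinking feature sizes makes leakage blow up:

```latex
% Tunneling probability for a particle of mass m and energy E
% through a rectangular barrier of height V_0 > E and width L:
T \approx e^{-2\kappa L}, \qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```

Because the barrier width L sits in the exponent, the tunneling probability T grows very rapidly as L shrinks. At a 1 nanometer feature size, a barrier is only about five silicon atoms (0.2 nm each) across, and the leakage current through it stops being negligible, which is the reliability problem Bill describes.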

Speaker 1:

Nick, we're getting pretty deep in here.

Speaker 2:

Yeah, some of this stuff is deep. The one thing that was hitting me too was the marketing from a couple of slides ago, like how Apple keeps talking about how their chips are now 7 nanometers; I think that was the A13 or whatever. There's so much stuff here that I was not aware of that we can unpack. But yeah, we're getting really deep.

Speaker 1:

Bill, I can certainly understand shrinking the chip down to use less energy, make it more efficient, with less heat. And I can certainly appreciate that for memory: making cache, which would be on-chip memory, bigger, or making a hard drive, which is maybe a different form of storage, denser so we can store more data on it. Right, I get that. But when you said, I think it was maybe eight or nine years ago, that we reached the maximum speed of like four gigahertz on a chip, and we think about our common business use case of a PC or a laptop or a MacBook, how much smaller does it really need to get? Does a 7 nanometer or a 10 nanometer chip really make that much of a difference when I'm running iOS or Windows or macOS or whatever it is, and we're just doing Word and surfing the internet? How important, really, is that dense chip?

Speaker 2:

Doesn't it just become a heat thing at that point, Bill? Right, smaller just gives off less heat, so it can be fanless, maybe.

Speaker 4:

Yeah, yeah. So all of that plays into it. I would argue that if you're just surfing the internet and looking up recipes and sharing cat pictures on Facebook, you don't need the most modern processor; you don't need miniaturization. You could actually run a fairly typical processor with barely a fan. The reason we're seeing miniaturization continue in processor design is that they want to squeeze more transistors into that die, because more transistors mean more execution units, and that means they can push more work through that processor. That's Moore's Law, and so you'll continue to see that miniaturization drive those big workloads we talked about earlier: artificial intelligence, deep learning, machine learning, data analysis. It's all very important for those things, but you won't need that.

Speaker 4:

Most people won't need that for their home PC. You're not really going to need it so much for IoT either, unless you're trying to miniaturize the device itself, and then you need everything inside of it, including the proc, to be miniaturized. So a lot of things go into that.

Speaker 2:

So what you're saying, Bill, is if you need it, you know you need it. If you think you need it, you probably don't. You're probably good with the normal chip.

Speaker 4:

Yeah, I think so, absolutely. I think that's very true. If you really need it, you'll know. So, we talked about what happens when silicon reaches that 1 nanometer level: we start to run into some problems with classical physics and we start to see some limitations. So what comes next after silicon? How are engineers and scientists looking to overcome those barriers? They have introduced into the labs a bunch of solutions that absolutely work, which includes new semiconductors.

Speaker 4:

There's a whole list, and this is a partial list here, but a lot of it really focuses on different forms of elements, and on doping one element with another.

Speaker 4:

So silicon carbide is silicon that's doped with carbon. There are other carbon semiconductors listed here, diamond, graphene, graphyne, all carbon-based, and these, like I said, exist in the lab today, but not in a form that can be mass-produced in any economical fashion. So the race is on to find the replacement for silicon in a commercially viable, mass-reproducible way. It's important to note that not all of these semiconductors are built of elements that are smaller than silicon. However, almost all of them are more conductive than silicon, which means less energy is lost to heat. They also consume less power, which is going to be huge as we go into a more environmentally conscious world. The slide after this will talk more about what's happening in superconductivity, but this is what we're looking at right now in terms of semiconductors. It will probably start to take hold toward the end of this decade, but I don't think you're going to see any of it really get perfected until the early-to-mid 2030s.

Speaker 1:

Bill, could you maybe talk a little bit about where copper might come in, because I'm picturing a circuit board and I'm picturing copper interweaved within the green board itself and then chips and resistors and transistors and whatnot on the board. But where is that relationship between copper and then? Where does silicon come in?

Speaker 4:

So copper is a conductor, a great conductor of electricity, and that's what it's always going to do: conduct electricity. So you'll see copper used for lines that are supposed to be always carrying electricity, always carrying a signal of some kind, whereas silicon will be switching on and off between being an insulator and a conductor.
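The copper-versus-silicon distinction Bill describes can be sketched as a toy model: a copper trace always conducts, while a silicon transistor toggles between insulating and conducting based on its gate voltage. The 0.7 V threshold below is an illustrative number of my choosing, not a spec for any real part.

```python
# Toy model: copper always conducts; a silicon transistor acts as a
# voltage-controlled switch between insulator and conductor.
THRESHOLD_V = 0.7  # illustrative gate threshold, not a real device spec

def copper_trace_conducts() -> bool:
    """A copper line is always a conductor."""
    return True

def transistor_conducts(gate_voltage: float) -> bool:
    """Silicon switches on only when the gate voltage crosses threshold."""
    return gate_voltage >= THRESHOLD_V
```

That on/off switching is what lets silicon encode logic, which a plain copper wire can't do on its own.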

Speaker 1:

It's like I wish I'd paid more attention in that eighth grade electronics class now, Nick.

Speaker 2:

I'm just thinking that through the whole presentation.

Speaker 4:

So this is where things get really interesting, right? After semiconductors, really, I think the holy grail of where computing is going for the foreseeable future is superconductivity. The graphic that I'm showing you here in the upper right of the screen is, of course, a very recognizable one: the Large Hadron Collider in Switzerland.

Speaker 1:

Hold on a second Bill Nick. Did you know what that was?

Speaker 2:

Well, I was coming off mute to ask. I've never seen this. Oh, okay.

Speaker 1:

Okay, I like how Bill's casual about it, he's like, yeah, you know, everybody recognizes this, right? We should just start showing Bill pictures of different stuff.

Speaker 4:

I've clearly had my head in this for way too long, then. All right.

Speaker 1:

So we'll have to see if you can recognize the difference between two different breeds of cats that Nick has.

Speaker 4:

So this is at CERN, in Switzerland, and it's just a giant machine, miles and miles long, and they've been using it to test superconductivity and quantum mechanics. In superconductivity, all energy passes without resistance and nothing gets lost to heat, which is huge. We've talked about heat a lot. We've talked about the loss of energy. If you can make all that go away, then you've got yourself something really special. Now, superconductivity is nothing new.

Speaker 4:

The Dutch discovered superconductivity in 1911. They were experimenting with mercury and liquid helium, again, two things you should not find readily available in anyone's home. But we've known about it for a long time, and in recent decades we've begun to envision some of the commercial applications for it. The problem with superconductivity, though, as you can maybe guess from the liquid helium element up there, is that in order for it to work you need one of two things: either very high pressure, or very, very cold temperatures, and by cold we're talking absolute zero, or as near to it as you can get. We can do this today, and as I'll talk about later on, we've done it with quantum computing. A quantum computer is a superconductor; it does all of its operations at approximately the temperature of space. But it takes a lot of energy to reach those temperatures or that pressure, and so it becomes a bit of a catch-22: you get superconductivity, which loses nothing to heat, but you've spent a lot of energy just getting it cold enough to do that. So scientists are searching for solutions at normal atmospheric pressure or at room temperature. Needless to say, that is going to be very difficult to do, and if they find it, good luck reproducing it. We're probably a couple of decades, a few decades, maybe four or five decades away from really being able to do that. I think you'll probably see some breakthroughs on it, but much like the recent breakthrough we saw in fusion, it's going to take quite some time for that to really come around and mean anything. However, when it happens, it will absolutely revolutionize electronics, and here's how it's going to do that.
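For a sense of just how cold "approximately the temperature of space" is, here is a quick unit-conversion sketch. The 2.7 K figure is the cosmic microwave background temperature, a standard reference value I'm adding; it isn't stated in the episode.

```python
# Reference math for the temperatures in play. Absolute zero is 0 K
# (-273.15 C); deep space sits near the 2.7 K cosmic microwave background,
# roughly where superconducting quantum hardware has to operate.
ABSOLUTE_ZERO_K = 0.0
SPACE_BACKGROUND_K = 2.7  # approximate cosmic microwave background temp

def kelvin_to_celsius(k: float) -> float:
    """Convert a Kelvin temperature to degrees Celsius."""
    return k - 273.15
```

So the "cold" in question is around minus 270 degrees Celsius, which is why the refrigeration itself eats so much energy.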

Speaker 4:

We talked about some of the consumer-grade and enterprise-grade processors today. The chip itself is a three-inch square, a few millimeters high, and around that chip sits all kinds of cooling apparatus. You might have big fans, you might have liquid cooling with radiators, anywhere from two to five pounds of cooling material to cool this one tiny device. All those fans and all that liquid cooling draw power, take up space, and cost money to produce. With superconductivity, if we could get there, that goes away. Now all you have is the chip. You no longer need all that extraneous stuff around it, so yields can increase, you can use fewer materials, and you're not producing heat. You don't need as much air conditioning in your data centers; you can run a data center almost like a normal room, without giant chillers or big water pumps. It would truly shake the industry whenever this comes about.

Speaker 1:

You could get that PUE right at one, couldn't you?

Speaker 4:

Yes, to use the data center term, absolutely, your efficiency would be right at one, yeah.
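The PUE exchange above is easy to make concrete. Power Usage Effectiveness is total facility power divided by the power that actually reaches the IT equipment, with 1.0 as the theoretical floor. The example wattages below are made up for illustration.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment
# power. A value of 1.0 means every watt goes to compute and none to
# cooling, lighting, or power conversion overhead.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute Power Usage Effectiveness from two power readings."""
    return total_facility_kw / it_equipment_kw
```

For example, a data center drawing 500 kW total to deliver 400 kW of compute has a PUE of 1.25; with superconducting, cooling-free chips the two numbers converge and PUE approaches 1.0.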

Speaker 1:

Now, if we continue to nerd out a little bit here with Bill, Nick, I kind of think everybody should have their own favorite physicist, and mine's Brian Cox. Brian Cox is a physicist, he works at CERN in Switzerland, and he's got a couple of YouTube videos that I think are particularly cool. One of them is called Why We Need the Explorers, and it was a TED Talk that's on YouTube now, and then he did another one on CERN's supercollider. So if you want to see some of this stuff in action, Bill, I would surmise it's going to be in that particular TED Talk.

Speaker 1:

But Why We Need the Explorers is about the spending that countries put into science education, and Brian talks about the Voyager mission, where Voyager is now reaching the edges of our solar system and has been sending pictures back along the way. One of those pictures shows Earth as just a tiny speck of dust, and I think it's a pretty elegant passage that he reads about that particular picture. But I was on a long flight, I think it was to London, one time, and I was watching these TED Talk videos and came across Brian, and subsequently watched him and Dr. Michio Kaku, who's also a physicist and talks about different technologies and things like that that are pretty interesting. But anyway, I went down even more of a rabbit hole, so I'm going to turn it back to you, Bill.

Speaker 4:

Those are all good recs.

Speaker 4:

Yeah, I know those two physicists. They're a lot of fun to watch. So, to start to wrap all this up, what does it all mean? I want to be clear that silicon isn't headed out anytime soon. It's going to be around a long time; it's sticky stuff. I think it'll be around until 2030 and after that, but around that time I think we're probably going to see some more innovative elements come to the forefront. I also think we're going to see some really cool things in processor design. We're seeing some especially innovative things, mostly from IBM and Nvidia. Intel has lost a little bit of their shine recently, I think, but they've still got it in them, and I think they still introduce innovative things and are going to come back and continue to do that.

Speaker 4:

Overall, though, I think in order to shrink semiconductors past their current state into the 2030s, you're going to need new materials. We're not going to get that much more out of silicon; the physics just isn't there. We're going to reach other barriers when some of the gate sizes within those processors reach a single atom. We can't do anything with it at that point; we can't shrink anything further than one atom. So it's going to become really interesting at that point, which will probably be in the mid-2030s or pushing into the 2040s.
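The "single atom" wall above can be checked with back-of-the-envelope arithmetic. A silicon atom is roughly 0.2 nm across; that figure is an approximation I'm supplying, not a quote from the episode.

```python
# Rough estimate of how many silicon atoms span a transistor feature.
SI_ATOM_NM = 0.2  # approximate silicon atomic diameter in nanometers

def atoms_across(feature_nm: float) -> int:
    """Estimate how many silicon atoms fit across a feature of given width."""
    return round(feature_nm / SI_ATOM_NM)
```

By this estimate, a 2 nm feature is only about ten atoms wide, so there simply isn't much room left before a gate is one atom and shrinking stops entirely.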

Speaker 4:

I also want to be clear that although quantum computing is getting a lot of press, as it should, it's not going to address any of this, because quantum computing is designed for very specific workloads. It is not a replacement for a general-purpose computing device. It's not a replacement for IoT. It's none of that. It's something else, which I'll get into at another time. But classical computing will continue to drive our information age, and we will have to find new substances to take it into the next era.

Speaker 1:

So how do we make money on this, Bill? What's the stock market play with this stuff?

Speaker 4:

Well, if you have yourself a few billion, you can hire the engineers and build some plants to make this stuff. This is one of those areas that's really hard to break into. This isn't like software, where you can write clever code from your garage. This is stuff that huge organizations will have to drive. I think the barrier to entry here is really high indeed.

Speaker 2:

Bill, going back to that collider in Switzerland, how big is this? I think you mentioned it.

Speaker 4:

Oh it's miles long, miles and miles long, yeah.

Speaker 2:

And how big is it from top to bottom here? Do we know? It just looks giant.

Speaker 4:

Oh yes, it is. I want to say, and I might be wrong on this, but I want to say it's probably around 30 to 50 feet across, or something to that effect. I'm not sure if you can see my mouse come across the screen here, but you can kind of get a feel for the scale from the platform work to the left.

Speaker 2:

Oh yeah, I didn't notice that before. Yeah.

Speaker 1:

I believe you can tour it too if you happen to be in Switzerland at the time. They're allowing tours, which I don't think is all of the time, but I know that it is open.

Speaker 2:

I wasn't aware of this. It's pretty cool.

Speaker 4:

This superconducting particle collider got a lot of press when it opened, what was it, maybe 15 or 20 years ago, I think? My history is a little rusty on this one, but it got a lot of attention because there was a fear it would create a black hole that would destroy the Earth and everything around it. At the time something like 98 or 99% of the world's physicists said no, that's definitely not going to happen. But it's always that holdout. It's like that one dentist who doesn't like the toothpaste all the other dentists recommend, and everyone wonders, what if that one's right? And so there was always this fear when this thing opened that it would just result in our complete obliteration.

Speaker 1:

I thought they were paid by the toothpaste companies to like certain toothpaste.

Speaker 4:

Probably. I'm not sure who's paying the scientists to speak out against this.

Speaker 2:

Oh yeah, there's certainly a lot of information here. I think we spend so much time trying to secure software and tools and help end users be better, but it's really cool to see how the light bulb is created and what we're actually securing from a holistic level. I've never been this deep with what we're talking about, so it's really cool to learn all these different items that go into it.

Speaker 1:

Can we ask you a couple other questions, Bill?

Speaker 4:

You can. I will warn you, I am not a physicist and I'm not a chemist, so we'll see how I do. I have my limits, but go for it.

Speaker 1:

Is the Earth flat?

Speaker 4:

That's right up there. Yeah, that's one of those things where, if you have to ask, you're not going to like my answer.

Speaker 1:

So that's not a yes or a no, Nick. I didn't get that out of him.

Speaker 2:

I don't think we have enough time to unpack that one.

Speaker 1:

And then the last question is how long have you been a Brony?

Speaker 4:

What is a Brony?

Speaker 2:

We just learned about this last week, I think.

Speaker 1:

But a Brony is a person who collects My Little Ponies.

Speaker 4:

Really? Really, that's a thing?

Speaker 2:

True.

Speaker 4:

Well, five and a half years then. Yeah, I'm going on my sixth year anniversary.

Speaker 1:

Cool. Well, Bill, thank you. This was an awesome presentation; learned a ton. And it sounds like you've got what's up next. Is it quantum or is it storage? Quantum? Quantum's next, then storage, and with storage we're going to get into things like using DNA or crystals for storage, something like that.

Speaker 4:

Right. We'll talk about crystalline molecular storage, DNA storage, all of that, all of which exists today in the lab. It's all proven technology.

Speaker 1:

I feel like I'm going to do some reading before I'm even ready for that presentation.

Speaker 4:

Don't come too prepared, please. I'm sorry, what was that?

Speaker 2:

I've got to brush up on my favorite physicist.

Speaker 4:

Oh yeah.

Speaker 1:

Absolutely. Yeah, that's right. You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing security, risk, and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of your organization.

The Future of Computing Resources
Processor Design and Specialized Workloads
The Future of Semiconductor Technology
Future of Computing and Investment Opportunities
Quantum and Storage Technology Presentation