An Interview with Aart de Geus, ex-CEO Synopsys
Companies mentioned: SNPS
As much as we talk about chips, silicon, and the computational power behind it all, there’s a market wholly dedicated to hitting sand with lightning bolts to make it think. Scientists globally have considered this a good move. Electronic Design Automation, or EDA, is the heart of this industry. Designing a chip has many stages before it even gets to anything like manufacturing, such as designing the transistor, creating the logic structures and cells, building logical blocks, placing and routing them, and then simulating these designs at large scale. EDA gets us most of the way there, before we hit electrical and thermal simulation with true multiphysics tools.
One of the two major companies in the EDA space is Synopsys. Founded in 1986 out of logic synthesis work at General Electric, Synopsys is now a public company and widely considered one of the market leaders in EDA software alongside its main rival. As part of its portfolio, the company also offers a wide array of IP for chip design, and in January 2024 it announced it was acquiring Ansys, a multiphysics simulation company, to enhance its simulation capabilities.
One of the co-founders of Synopsys is Aart de Geus, who stood as the longest-serving US-based tech CEO, from 1987 until 2024. Aart announced he was stepping down as CEO, naming then-COO Sassine Ghazi as the new leader, with Aart transitioning to the role of Executive Chairman.
Aart has been a critical part of the semiconductor industry for the last four decades. Every major boom, bust, growth, and fad – Aart has seen it all, and has a million and one stories and explanations for this industry. The first time I spoke to Aart, probably around 2020, we spent a good part of two hours just talking about the industry. It was in 2020 that Synopsys launched DSO.ai, its first machine learning tool to aid chip design, showcasing that in some areas machine learning would help build the next wave of silicon.
I managed to catch up with Aart in December 2023, just before the handover, for his final on-camera interview as CEO. In this interview, we cover the industry highlights over the years – the growth of chips and EDA, the cycle of performance and efficiency, the systemic complexity of 2D and 3D chip design, the role of machine learning, and finally the handover to the new CEO.
The following is a video interview, with a transcription underneath. The transcription has been lightly edited for ease of reading.
Ian: You’ve been in this industry for a while - and you’ve had many awards! The Robert Noyce Award, GSA awards, etc. You’ve seen the major changes in this industry, from CAD to EDA. What have been the highlights?
Aart: In many ways the highlights almost always start by accidental events that put you in the middle of an opportunity. So the whole synthesis thing started whilst I was at General Electric, and GE decided to get out of semiconductors. We would potentially all get laid off, and after a lot of different pathways, finally we had the opportunity to spin out the team or a piece of the team and the technology with GE’s support. Then we introduced what was then still somewhat of a prototype [of our technology] to a market that was ripe for it. [It was ripe] because people were designing groupings of 300-500 gates maximum. So here was this thing that could do it automatically and get better results.
So it is only when you look back, many years later, that you see that it happened to be a watershed moment between CAD, computer-aided design, and EDA, electronic design automation - and the automation was really one of the things that digital design needed. Digital design - it wasn’t then called Moore’s Law, but it was waiting for Moore’s Law - was primed for it, and of course in order to do that you have scaling and scaling and scaling. An exponential is a wonderful curve - it’s a self-multiplying one. So we were on that curve, and managed to stay on it. But being part of that moment, in the moment, was like ‘woah, we’re doing stuff and moving fast’. In hindsight, you can see the implication - an estimate would say that, looking back today, we helped with a roughly 10 million times increase in productivity. Which adds up.
Ian: What else has 10 million times increased in that time?
Aart: In all fairness, you know, around the mainstream of this - the automation from synthesis then had to be accompanied by the ability to automatically do place and route. If you then have the function and the form together, and those can work together on an exponential basis, now you have something that is really going to move fast, and that became the massive differentiator between digital design and every other form of custom or semi-custom. We’ve reaped the benefits, and actually there's still more need for us.
Ian: When you were doing the spin-out from GE, did you think this (EDA) was going to be a widely generalized productisation of design, or was the thinking more blinkered / tunnel vision?
Aart: It was so accidental, because at that time at General Electric we did gate arrays. Gate arrays are rows of transistors, and then you could put metallization on top - that became the NAND gates, NOR gates, and inverters. I had spoken to someone who pioneered this notion of BDDs (Binary Decision Diagrams), which was just another word for multiplexers. So I asked a designer friend at GE - ‘Hey, can you put some multiplexers on that array?’, and he did the metallization for that. Then we gave that to the designers, because presumably if you had multiplexers, you could design some of the functions to be much more efficient. It happened to be true - I had not quite understood at that time that multiplexers are essentially transmission gates. So when you have multiple of them in a row it is not great, but the problem turned out to be that they (the designers) couldn't design with it because they didn't know how to use them. So I thought to write a program to do it automatically. Easy to say ‘just write a program’, not having a clue that this was called synthesis! But that's essentially what we invented, and later I discovered of course that there were other people that had done wonders already.
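As an aside for readers wondering why multiplexers can implement arbitrary logic - and why a program could therefore do the mapping automatically - here is a minimal Python sketch of Shannon expansion, the idea underneath BDDs. Everything here (function names, the tuple representation) is illustrative only and not from any Synopsys tool.

```python
# Minimal sketch: Shannon expansion maps any Boolean function onto 2:1
# multiplexers, f = mux(select=x, data1=f|x=1, data0=f|x=0), which is
# exactly what a BDD encodes. Purely illustrative, not real EDA code.

def shannon_mux_tree(func, variables):
    """Decompose `func` (a callable taking a tuple of 0/1 inputs) into
    nested 2:1 muxes over `variables`."""
    if not variables:
        return func(())                 # no variables left: constant 0 or 1
    x, rest = variables[0], variables[1:]
    # Cofactors: fix x to 1 and to 0, then recurse on the remaining inputs
    hi = shannon_mux_tree(lambda bits: func((1,) + bits), rest)
    lo = shannon_mux_tree(lambda bits: func((0,) + bits), rest)
    if hi == lo:                        # trivial reduction, as a BDD would do
        return hi
    return ("MUX", x, hi, lo)           # select=x, data1=hi, data0=lo

# Example: a 2-of-3 majority function built purely out of muxes
maj = lambda bits: int(sum(bits) >= 2)
print(shannon_mux_tree(maj, ["a", "b", "c"]))
```

The point of the sketch is simply that the mapping is mechanical - which is why handing designers a sea of multiplexers without a program to target them was never going to work.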
Ian: This industry was one of the first to coin the term ‘AI’ for the synthesis and place and route, but it was still very much heuristic AI rather than machine learning as we’re thinking about it today.
Aart: Our programme was called SOCRATES: ‘Synthesis and Optimisation of Combinational Circuits Using a Rule-Based And Technology Independent Expert System’. Easy to remember!
But it was seminal. I would like to highlight the ‘rule-based expert system’ - so now, suddenly, I’m a pro at AI in the early days. But often I forget to mention that you would add rules and it’d get better, then you add more rules and it’d get better, then you add more rules and it didn't work so well anymore. This is because of conflicts, and of course it has gone through many evolutions. Now, fast forwarding all the way to today, of course AI is all over this. It involves machine learning, but also the entire design flow.
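To illustrate the failure mode Aart describes - rules helping until they start fighting each other - here is a toy rule-based rewriter. It is a deliberately crude sketch (string matching on a made-up expression format, nothing like SOCRATES): the last two rules undo each other, so the run hits its step limit rather than converging.

```python
# Toy rule-based optimiser, purely illustrative. Each rule rewrites a
# pattern in a tiny string-based "netlist". The first two rules help;
# the third conflicts with the second, so they can loop forever.

RULES = [
    ("double-invert", "NOT(NOT(a))",       "a"),
    ("demorgan",      "NOT(AND(a,b))",     "OR(NOT(a),NOT(b))"),
    ("demorgan-rev",  "OR(NOT(a),NOT(b))", "NOT(AND(a,b))"),
]

def apply_rules(expr, max_steps=10):
    """Greedily apply the first matching rule until nothing matches or the
    step limit is hit (the limit is what stops conflicting rules looping)."""
    for _ in range(max_steps):
        for name, pattern, rewrite in RULES:
            if pattern in expr:
                expr = expr.replace(pattern, rewrite, 1)
                break
        else:
            return expr          # no rule fired: converged
    return expr                  # gave up: rules are fighting each other

print(apply_rules("AND(NOT(NOT(a)),NOT(AND(a,b)))"))
```

With only the first rule the example improves and stops; with all three, the second and third rules alternate until the step limit - a miniature version of "add more rules and it didn't work so well anymore".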
Ian: So going from the 100 gates, 1000 gates, and now we’re dealing with billions if not trillions of transistors - how have you seen the attitude to chip design change over that time?
Aart: The attitude? The attitude hasn’t changed one bit. Tomorrow is a lot harder than yesterday, and we’re going to go for it. That has not changed. I would say that for a long period of time the exponential, maybe not exactly the way Gordon (Moore) had stated it, has essentially continued. In a minute, I'm sure we’ll talk about the fact there is a whole new exponential on its way now, but there are some phases.
If I look at the circuits of then, and I happened to find a few of them in my drawer on paper, the first thing I realised is that there were only two metrics. Those two metrics were the number of gates, and the speed of the circuit. The fact it was just the number of gates, and not the layout, tells you something about how tightly these things were packed - or not. Then of course it became area versus speed, and it was really towards the end of the 90s that power became more and more important. Then of course by the time of the early 2000s, we now have smartphones - or any phone as a matter of fact. There's this thing called battery life, and we barely remember that when you could do a phone call for an hour, it was amazing. Now it’s like ‘no no, I want to look at the internet for 5 hours in a row and play games’ and god knows what else. That is an enormous change, and it is possible through an incredible increase in speed, complexity, and power management. I will fast forward and say right now that for the next 20 years, power is going to be the single biggest issue at the most micro level, at a thermal level, and at the humankind climate level.
Ian: I often see chip companies go through a rather cyclical pattern of performance, then efficiency, then performance, then efficiency, repeating. It feels like we’re going through another one of those efficiency scaling cycles right now.
Aart: Another way of describing that is that the first one in the series is performance, and then you go after efficiency while you're working on the next generation of performance. This is because the generations are invariably driven by what gives you the most marketable differentiation - not to belittle the efficiency, because cost has always been key if your device is just better. The rate of change is so high, and it is what determines the leadership of Moore's law. The most challenging, and I’d say at the same time most motivating, question is how do we hold onto Moore’s Law, or whatever the equivalent is.
I use Moore’s Law, this notion of exponential, as really the characteristic of 50 years of mankind. There have been other ‘Moore’s Laws’ in the past - in the 1500s, book printing: in the 50 years after it arrived, somewhere around 20 million books were printed, and those books changed humanity. Then of course you can do that with the industrial age as well - it has similar characteristics. This one has sort of gone on for a while, and the funky thing is that the traditional Moore’s Law is slowing down, but not stopping - because it is amazing the new technology still coming out. Meanwhile there is this whole different Moore’s Law that says ‘I do actually want 1000x more transistors’, and the way we’re going to get that is by putting chips so close together that you wouldn't believe it.
Ian: The worry is that we’ll run out of power before we hit that!
Aart: The whole point is that we use these chips to use AI, and AI is going to resolve all the problems in the world!
Ian: Ah the universal solution!
Aart: Exactly! But therein lies the fallacy, because the computation needed is going to be enormous for this, and the hunger will be enormous for this, because this does bring differentiation. Already now all the hyperscalers are trying, as fast as possible, to secure their own sources of energy. In all fairness, they try to go green as much as possible, and so the net-neutral push in our industry will continue at a feverish pace, because of course we are electrified - so that's already the common vocabulary of energy, right?
Ian: One of the recent trends is this chipletization, an open chiplet marketplace, whatever you want to call it. I remember back in the late 80s and early 90s, a very similar thing was being thought about - then everyone went into monolithic. Now we’re breaking out into chiplets again. How is this era of chiplets different from back then?
Aart: Personally I don't think we quite made it to monolithic - we never put power devices on the same chip, we never did analogue mixed signal. But trying is always good. We didn't do memory on logic chips, though at the same time now we find that the memory is found next to logic, so which way did it go?
But you’re pointing out an interesting direction - the notion of being able to bring together things that are heterogeneous, meaning different in their technologies, and doing what it takes to make them successful. It is a fantastic opportunity. I like to call this next exponential age ‘Sys-Moore’, for systemic complexity, in contrast to the classic Moore’s Law, which is scale complexity. I know the scale complexity demands a lot of systemic understanding, but fundamentally it was about more transistors, more transistors, more transistors, then bring the cost down, then you glue them close together on one chip. Now we have multiple chips, so right there you have a brand new set of dimensions to play with. You have the heterogeneous capabilities, and the single thing that is most relevant in making that transition possible is actually the unbelievable progress in the last 20 years of connectivity between chips. Now it is far from resolved - there's a lot of thermal this and that somewhere along the way. But the fact is that nonetheless there has been a 100x, 100,000x improvement in pin density possibilities and so on. So that opens doors, and of course everything then becomes a question of architecture.
Ian: We’re moving from this 2D chiplet ecosystem and now some companies are going 3D - chip on chip on chip. Not only with memory, but with logic, and it is getting extremely complex. I remember speaking to companies when they were first doing it, and they were saying the software tools need to catch up! How much of a change has it been to deal with vertical stacking, not just horizontal?
Aart: After we have an unbelievable push forward, 10x or 100x, I love it when the users request that the tools should be able to do more! On one hand, you wonder if they could acknowledge what we did yesterday for a minute! But at the same time it is fantastic. We sort of like unhappy customers, and the really good ones are unhappy because they suddenly see the next opportunity - but the minute you see it, you're already late.
Luckily enough, we saw this coming a number of years ago and started to invest in it. You're right, this 2D, 2.5D, 3D - but it is all of the same ilk. It is, for starters, proximity and bandwidth, and then of course the speed of that. But also the energy between them, and then the minute you get truly layered dies on top of each other you have manufacturing challenges, testing challenges, repair challenges, and of course, the closer together they are, the more thermals are your common enemy.
Ian: So how many of those features end up coming from meeting customers’ demands, compared to things that you guys are inventing internally?
Aart: I think we are much more inventing internally than we were in the late 1990s. Actually, in the late 1990s, they would ask when we were going to close the design gap - as if we were guilty of the fact that you couldn't just design anything you wanted. Whilst there was some truth that one could manufacture more things than one could necessarily design automatically, we have long since caught up with that, with relatively clear definitions of what you want to get. We can automate pretty much anything. Automating it well is no different than making transistors well - if you make them much smaller, they are harder to build, but it’s cool to push, right? Same for us.
Ian: In this era of machine learning, we’re starting to talk to companies about applying ML in the tools for actual chip design. When we last spoke, that was the big announcement of Synopsys’ AI/ML tools to help in that co-design. How has that changed how you’re approaching things?
Aart: A few years ago, I think we were more forthcoming about what we had in terms of being able to really have multiple tools that worked very well together. That's an important point, and out of that came the ability to take a piece of the design flow and automate it completely. By the way, because we could do that and have multiple perspectives on the design simultaneously, we got better results. So this took what was typically a few months of work and turned it into a few weeks of work. That was remarkable.
Aside from that it had exactly the same characteristics as the synthesis of 30 years earlier - because there too, what happened then was we would go to designers who would work for many weeks on 400 gates, optimising the heck out of it. We would take it, and in a matter of a few hours, we’d give something back that was literally 30% smaller, with fewer gates, and 30% faster in terms of the critical paths through it. For starters they couldn't really immediately look at it, because it was a netlist, and if you look at a netlist you can't really see anything, so you have to draw the schematic yourself. Drawing that schematic was a step backwards, and we figured that out pretty quickly, so we developed a schematic generator too - which we hadn't realised at the time was a place and route system. But out of that came something fantastic, because then we could go back [to them with a schematic]. They would go away and for a week we wouldn't hear anything, because they analysed and analysed it - they couldn't believe it was right. They’d spent so much time on perfecting their design [and we’d done better].
‘There's no way it can do that!’
‘I know how difficult this was!’
‘I worked on it for 3 weeks!’
They’d come back and say it was unbelievable.
But now you have the danger of expectations being too high! But something better happened out of that, which is that by being able to look together at the circuit, they would say ‘this is fantastic to get this so quickly’. Then they would say they could have done it differently [in this way], and this would be better. It did two things - one, the people that were able to see the results instantaneously became experts alongside us, appreciating what we could do while their own people were still skilled at it. It also made for an instantaneous alliance, because we would take those lessons and put them in [the tool]. At that time there were still a number of rules you could apply, and we could come back and say how we fixed it. People would say ‘hey, my stuff is with Synopsys, I’m a bit of a Synopsoid now’. That made for a great long-term evolution of improvement.
Shortly thereafter, we introduced this notion of measuring everything in terms of three vectors: quality of results (QoR, which at that time was speed, and later became speed and power), time to result (which of course is the ‘weeks to days or hours’), and cost of result (which at that time meant the area or the gate count, but could also be other factors, such as needing experts or something like that).
Ian: As in, reducing a team of 50 down to 5?
Aart: If you look now in modern times, it would be more a team of 20 going down to one. So that parallelism existed, but at a degree of complexity that is literally 30 years later - so a full Moore’s Law, 30 years later. It is not like we do a whole chip and everything is ready to go, but in very big blocks it is amazing, and so now we have a similar set of questions of looking at very complex chips and asking if you split them and have two simpler ones - or two that then grow to the same complexity. For starters, it is really ‘chip out’ - it is not like board design where you say we have a board and now we bring the chips in from the outside. No, this design has to be from the essence, which is ‘chip out’, because ideally you want the other chip to be just a continuation. The only bothersome thing is the connectors, which are slow compared to what's inside.
Ian: When I speak to certain companies who are dealing with this, they’re approaching it from the abstraction layer approach when it comes to chip design, that the layers are independently evolving from each other. Then in comes machine learning, and it is starting to re-blur some of those lines. How are you dealing with that?
Aart: There are two types of blurring. There's verification blurring, and there is creation blurring. Verification blurring is old, meaning that you create a model of a transistor and then the level of abstraction is rigid binary, a 1 or a 0. That's an unbelievable level of abstraction when you think about it. Then gradually, we add some physics into it, and one of the biggest distinguishing factors of the synthesis we brought to market - which most people didn't know - is that we had built in timing. Timing is a physical characteristic, it is not a binary 0 or 1. Your 0s and 1s are just not that good all the time. So instead of a step function, it was a continuous function. Still, the fact is that many verifications you can do with just step-function binary 0s and 1s. If you know it is synchronous, you have essentially simplified the timing too, into clock ticks. It is a classic example of a single simplification that has been the best in the world ever.
We have always wondered, why do we need a 0 and a 1 - why not just 1? That's sort of boring because nothing changes, so it is almost the minimum simplification you can arrive at. But the interesting thing is that fundamentally the software coming down to these tools is also 0s and 1s, and then they get further abstracted into languages or meta-languages and so on - except that if you now say we’re going to have multiple chips coming together, or if that thing over there is a little heater, what does that do to the speed of the other guy? Well, now the model needs to go deeper. You need to grab some physics. You could say that due to the physics, if this section warms up, the other section runs at half the speed, or whatever - that is a little too simplistic, but you get the idea.
Ian: Especially when it’s stacked.
Aart: Especially if it is stacked, and then you need cooling, therefore, systemic complexity.
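To make the ‘grab some physics’ point concrete, here is a minimal sketch contrasting the pure 0/1 abstraction with a delay-annotated netlist plus a crude thermal derating factor - the ‘this section warms up, the other runs at half the speed’ idea. The delay numbers, data structures, and derating model are all invented for illustration; this is not how any Synopsys tool models timing.

```python
# Illustrative only: a two-gate path where one gate sits in a "hot" region
# (say, heated by a neighbouring die) and is derated to run slower.

GATE_DELAY_PS = {"AND": 12.0, "OR": 14.0, "NOT": 6.0}   # made-up numbers

def arrival_time(node, netlist, input_arrivals, derate):
    """Latest signal arrival time at `node`, in picoseconds.
    `netlist` maps node -> (gate_type, [fanin nodes], region)."""
    if node in input_arrivals:                      # primary input
        return input_arrivals[node]
    gate, fanins, region = netlist[node]
    worst_fanin = max(arrival_time(f, netlist, input_arrivals, derate)
                      for f in fanins)
    # Thermal derating: gates in a hot region are slowed by derate[region]
    return worst_fanin + GATE_DELAY_PS[gate] * derate.get(region, 1.0)

netlist = {
    "n1":  ("AND", ["a", "b"], "cool"),
    "out": ("OR",  ["n1", "c"], "hot"),
}
inputs = {"a": 0.0, "b": 5.0, "c": 0.0}
print(arrival_time("out", netlist, inputs, {"cool": 1.0, "hot": 2.0}))
# The binary-only abstraction would just say out = (a AND b) OR c - no timing,
# no temperature; the deeper model has to carry that physics along.
```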
Ian: One of the common complaints of modern chip design, especially on the leading edge, is the cost of bringing these chips to market. Design, manufacturing, testing, verification. Whenever I see the graphs produced by the analyst firms, about the breakdown of the costs, software is a slowly increasing part of that.
Aart: Slowly, as in undervalued? Is that what you meant to say?
Ian: It can be more than 50% of a chip design costs sometimes, depending on who you talk to! Why is that?
Aart: So, what's your cost of going from your home to work? If you use a car, isn't that a lot more expensive than some animal and some little cart?
It’s a speed thing, it’s a convenience thing, and it’s so many things. I mean, what we do is amazing, but the complaint about it being more expensive to do things…
Ian: OK perhaps not complaints, but I’d say observations!
Aart: Aha, you made it sound a little more negative at first! But the fact is, of course I understand why. Yes, things are more expensive, but they are also so dramatically more complex. I would argue any day that if it becomes too expensive, it will slow down. But I would also argue it has actually become cheaper, because suddenly we have a breakout of what you can do with the software. If the only thing you could do in the 90s was listen to the radio, damn that was cool, but now there’s no comparison to what you can do.
So what is interesting in the whole Sys-Moore era is the silicon-up view, and I want to paint it as an hourglass. In the silicon-up view, that silicon has just moved up because of much more complexity and capability. But because it has moved up, it has touched down on all the verticals, where at the intersections with software it can now impact whatever vertical you're in. When a vertical gets impacted, it touches an unbelievably large amount of economic space, and so if, just for argument’s sake, you can make a financial transaction 3% smarter - smarter than your neighbour - you're raking it in.
But maybe another comparison is just to take the industrial age, and the moment machines were able to move things. It opened up a standard of living that had many downsides. We know that, but it also changed the world. For us, this is not a standard of living increase maybe, but it is a standard of insight and understanding that is just remarkable. But what is so interesting is that the big data from all these verticals, all being different, is now intersecting with a new form of computation - called AI, or ML - with the ability to actually do it in a reasonable time. So it is not like winning the chess game is going to take you five years of computation - and that’s an old one, nothing compared to today. Now it’s actually very fast. Now you have the continued push of the Moore part of Moore’s Law, but you also have an economic pull, and when you have a push and a pull at the same time, that brings about amazing change. That is why I’m concluding that we now have a 20-year exponential of demand and needs and so on.
I always talk about ‘techonomics’, meaning that no difficult technology decision is made without understanding the economics of it, because otherwise you can't afford it. So when you ask about expensive software, that’s a techonomic question, and the question is how expensive is the technology versus how valuable is its outcome? In some ways it took the post-COVID environment, combined with the de-globalisation push, for people to appreciate that when you don't have the chip that’s needed for a car door knob controller, that $0.53 chip means a $53,000 car cannot be sold. For the first time people realised these chips are valuable, and there’s a need to control that supply. That is one of the reasons why the semiconductor industry is absolutely going to go to a trillion dollars by the end of this decade. You get what you pay for.
Ian: It’s interesting you bring that up, because I want to get your opinion on the fact that in the automotive industry, just to pick one vertical, we’re seeing so many new automotive CEOs who used to be semiconductor CEOs, because of this desire to have their own secure supply chain and bring everything together. What's your opinion on that - does that continue on what you’ve just been saying?
Aart: So now you have the whole disruption of an industry. That industry has been unbelievably stable, around one modus of locomotion. It has been optimised and optimised and optimised - first at the mechanical and fluid level, and then at the electronic level. Two things are happening and I think they're completely independent - one is the electrification of the car, which is essentially getting off of fossil fuels and so on, and that has some other advantages and disadvantages, but fundamentally it requires an unbelievable change of the global infrastructure of energy. Independently of that comes this notion of autonomous driving. I remember, in these forums I do with 30-40 top executives, 12-13 years ago in Silicon Valley, when it wasn't yet called Waymo - it was the Google self-driving car - essentially driving itself around. Suddenly it became ‘this is going to happen’, and I remember asking people in the room how many years it would take to become normal. The answers fell into two groups - either ‘in two years everyone will have one, or have access to one’, or ‘around 2030’.
So what is the reality? What we have now is utterly defensive cars, so there’s been an enormous benefit - but are you really going to take a nap at the steering wheel? Or is this one of those ‘when the steering wheel wobbles, it tells you to take over’ situations? And so this level three, where it can just pass the steering wheel back, to me is completely unusable. But that also shows how level 4 and level 5 are extremely difficult. So I tell you that because now most of the automotive companies are figuring out that the decision to go to level 4 or 5 implies re-architecting the very control of the vehicle with enormous smarts built in. There is also a very high requirement to be better, and provably better, than human drivers. We know it could be better, or perhaps it is already better [than humans today], but it is not very provable. The challenge is who is now driving the essence of the car - the fact that they talk about software-driven vehicles is a very big statement. They (the car manufacturers) spent decades calling themselves ‘metal benders’ in Germany.
Ian: Now they have to be software developers.
Aart: They are now software benders! You can't really bend software too much.
Ian: As I’m interviewing you now, it is December 2023. You've announced that you are to pass the reins over, after 37 years, to Sassine Ghazi. How does it feel?
Aart: Well for starters, there's a big piece that feels fantastic! Sassine is an unbelievably talented person who also has 25 years at Synopsys, and has had a big impact on many of the things we’ve done. But he has also grown up with a background of starting as an application engineer, and nothing teaches you better how painful it is for the customer than having to help them with your own software. From there he moved into the sales function, and there he was not just a ‘well, I’ll just sell you stuff’ [sort of person] but someone who was capable of seeing situations through the eyes of the customer and what the economic impact is. In that sense he garnered a very high degree of trust. Customers of course always see us as part of the L in P&L, but really it is about what we can do to differentiate forward. When you're on an exponential, that's the driver.
Then he became General Manager, made many changes, and pushed very hard for the AI capabilities. Then he spent some time at the corporate executive level, as COO and President. So I have no doubt that in many ways, and for many new things, he will be better than I could ever be. At the same time, I completely trust him to keep his mind on the values - the importance of people above everything else. Now, I’ve been preaching in our company (and I'm a preacher from time to time, surprise surprise!) that everyone should always, for themselves or for their team or for the company, look at version N+1. It’s a very math-oriented way of saying: develop what is next. It is so easy to preach it to someone else, but now I'm sitting here and thinking ‘what is my N+1?’. When you say you're going to just do that, it is not that simple. So my intent is to spend a month disappearing completely. Part of that is, after 40 years of literally non-stop sprints, just to resettle a bit, but also to think through what’s important. It will help me think about what I can do for Synopsys, maybe [what I can do] for the industry.
Ian: You're still staying around as Executive Chairman.
Aart: I am, but you know that's a thing that is purposefully loosely defined. I've been Chair for a long time, so I know what I can do for the board. Maybe I’ll have a little more time to myself.
Ian: It’s when you walk around the company and they won't let you into certain meetings anymore!
Aart: It’s the most difficult thing to learn. I have plenty of years where I’ve been leading things and moving them forward. But that is a learning process, there is no doubt. I'm very conscious of it, but I’m now learning the emotional challenge of living up to that. I know well that when you have high and low emotions, your mind is working hard on it - I’ll master that at some point! But in that context, the first thing is to support Sassine, because he has to find his own style.
Ian: You’ve known him for so long - is there any specific advice you’ve given to Sassine?
Aart: I try not to do that. I say try, because it is hard for me not to do it! This is the very thing I need to learn to resist, because he needs to, as much as possible, plot his own path and make clear to all of our employees how he wants to run things. Now, I don't think I can completely resist giving advice of course, but you know, I need to learn to wait for him to ask me for it if he wants it. So my intent is to move in a slightly different direction - stuff he wouldn’t have the time or inclination for, but that is still important to us.
Ian: So looking big picture, 5-10 years down the line - where do you want Synopsys to be? Where do you hope Synopsys to be under Sassine?
Aart: Well, we are in this very unique situation that we’re in the middle of the ecosystem. That brings together the pyramid and whatever the opposite of the pyramid is - we’re sitting in the middle. We’re enabling the computation above, whilst harnessing the physics below. The computation above is becoming way more demanding - and by the way, it is not just computation, but also the data. The physics below is also becoming more demanding, and so difficult is our middle name. That’s been Synopsys for a very long time. [Sassine] will be perfectly capable of growing the company - he is very talented in managing the business side of things, and we have a team of technologists that absolutely will nail all these problems one after the other. But you know, we will resolve those things. They have their ups and downs, you have to navigate through those, but you have to keep hiring the best people. The best is not just the most technically adept, but people that are fundamentally good to work with, and so we continue evolving the culture - it is part of what needs to be done.
Ian: Thank you Aart for spending time - you have been such a big presence in the industry for such a long time, and I'm glad I caught you whilst you were still CEO! And enjoy what sounds like a partial retirement.
Aart: The question is retiring to what! Retiring is a weird word. The intent is to find the next phase where I’m still looking for impact, but I'm also looking for an impact that has additional meaning, because there are so many things in the world that are changing at an enormous speed. Increasingly, companies are the new home for cultures and places where people align on values, and I think we’ve built something that’s good in that direction. We need to see what we can do to have an impact on the world that stays very positive.
Ian: That’s a long way of saying he’ll still be around!
Aart: Yes, I hope so!
More Than Moore, as with other research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, which may include advertising on the More Than Moore newsletter or TechTechPotato YouTube channel and related social media. The companies that fall under this banner include AMD, Applied Materials, Armari, Baidu, Facebook, IBM, Infineon, Intel, Lattice Semi, Linode, MediaTek, NordPass, NVIDIA, ProteanTecs, Qualcomm, SiFive, Supermicro, Tenstorrent, TSMC.