Tiffany Trader
Is the parallel everything era here? What happens when you can assume parallel cores? In the second half of our in-depth interview, Intel’s James Reinders discusses the eclipsing of single-core machines by their multi- and manycore counterparts and the ramifications of the democratization of parallel computing, remarking “we don’t need to worry about single-core processors anymore and that’s pretty significant in the world of programming for this next decade.” Other topics covered include the intentions behind OpenHPC and trends worth watching in 2016.
HPCwire: Looking back on 2015 what was important with regard to parallel computing? And going forward, what are your top project priorities for 2016?
James Reinders: I’ll start with my long-term perspective. I was reminded that it’s been about a decade now since multicore processors were introduced back in about 2004. We’ve had about a decade of going from ‘multicore processors are new’ to having them everywhere. Now we’re moving into manycore. And that has an effect I don’t think a lot of people talk about. Ten years ago, when I was teaching parallel programming, and even five or six years ago, there were still enough single-core machines around that when I talked to people about adding parallelism, if they weren’t in HPC, they had to worry about single-core machines. And I can promise you that the best serial algorithm and the best parallel algorithm — assuming you can do the same thing in parallel — are usually different. And it may be subtle, but it’s usually enough of a headache that a lot of people outside of HPC were left having to put a conditional in their program, to say if I’m only running on single-core let’s do this, and if I’m in parallel, let’s do it in parallel. And it might be as simple as if-def’ing their OpenMP or not compiling it with OpenMP, but if you wanted it to run on two machines, a single-core and a multicore, you pretty much had to have a parallel version and a non-parallel version for a lot of critical things, because the parallel program tended to have just a tiny bit of overhead that would slow it down when run on a serial machine.
So I ran into cases 8-10 years ago where someone would implement something in parallel and it would run 40 percent faster on a dual-core machine, but 20 percent slower on a single-core because of that little overhead. Now we don’t have to worry about this anymore, even when I go outside of HPC – HPC’s been parallel for so long – although the node level is kind of doing the same thing in this time period. Seriously, I was in an AT&T store and they were advertising that they had quad-core tablets and I was laughing. There are also some octo-core things now in that world. I just laughed because it reinforces my point that we don’t need to worry about single-core processors anymore, and that’s pretty significant in the world of programming for this next decade: it doesn’t hold back the stack – we don’t have to program twice in any field anymore. We can just assume parallel cores. I think that’s a big deal.
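To make the kind of guard Reinders is describing concrete, here is a minimal sketch in C with OpenMP (illustrative only, not code from the interview; the function name, array size, and threshold are hypothetical). Built without OpenMP, the pragma is simply ignored and the loop runs serially; built with OpenMP, the if() clause still falls back to a serial run for small inputs, avoiding the overhead that used to make parallel code slower on single-core machines.

```c
#include <stdio.h>

/* Hypothetical reduction kernel used only to illustrate the guard
 * described above. The reduction(+:total) clause gives each thread a
 * private partial sum, and the if() clause keeps small inputs serial
 * so the parallel setup overhead is not paid where it would hurt. */
double sum_array(const double *a, long n) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total) if(n > 100000)
    for (long i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void) {
    enum { N = 1000000 };        /* array size chosen arbitrarily */
    static double data[N];
    for (long i = 0; i < N; i++)
        data[i] = 1.0;
    printf("sum = %.1f\n", sum_array(data, N));   /* expect 1000000.0 */
    return 0;
}
```

Either build, say gcc -fopenmp sum.c or plain gcc sum.c, comes from the same source file, which is the point about no longer having to program everything twice.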
I’m a big believer in the democratization of parallel programming and HPC. I think we keep seeing things that make it more accessible: one of them is having parallel compute everywhere, the other is advancing tools. And I think we saw a couple of things introduced in 2015 toward that democratization that, combined with the fact that everything’s parallel, are going to be transformative in the upcoming year and decade. A few of them are big things that I was involved with at Intel. One was that we’ve had a pretty successful foray into promoting code modernization. To be honest, I wasn’t so sure myself, because I’ve been talking about parallelization for so long that I thought everyone was already listening. I think there’s a lot of dialogue left to happen to truly get all of us to understand the ways to utilize parallelism. In our code modernization efforts, we’ve had things ranging from on-site trainings and events to online webinars and tools, and they’ve been extraordinarily popular.
I’m also very excited about OpenHPC, and my perspective comes from visiting all these different compute centers, where I get the wonderful opportunity to have logons on different supercomputers around the world. I can use systems at TACC and Argonne and CSCS in Switzerland and many others, and they all solve similar problems. They all bring together, for the most part, all these different open source packages. They usually have multiple compilers on them; they have a way to allocate parts of the machine. They have a way to determine which version of GCC you’re using along with which version of the Intel compiler, etc. They all solve the same problems, but they all do it differently. There are quite a few people like myself that have logons on multiple supercomputers. So if you talk to scientists doing their work, lots of them have multiple logons and they have to learn each one, but that also means that they aren’t sharing as many BKMs [i.e., best-known methods]. There’s a lot of replication, and when you look at the people that are supporting your supercomputer, I think there’s a lot of opportunity to bring more commonality in there and let the staff that you have focus on higher-level concerns or newer things. So OpenHPC really excites me because it’s bringing together packages much like these centers already have, and validating them — leaving the flexibility that you can pick and choose, but at least giving a baseline that’s validated, where they all work together — to give a solution to these common problems, and even to have pre-built binaries.
And I’ve had the good fortune to sit in on the community sessions, people in HPC that have been debating, and it’s interesting because there have been some pretty heated debates about the best way to solve some of these problems, but at the end of the day they may come up with two solutions to a problem or maybe they’ll pick one that’s best. But then they’re kind of solving it industry-wide instead of one compute center at a time, and I think that’s going to help with the democratization of supercomputing, of HPC. So that got off the ground in 2015, and I think 2016 will be very interesting to see how that evolves. I expect to see more people join it and I expect to see a lot of heated debates about what the best way to solve something is. But these are the sort of debates that have never really happened before, because one compute center can have an argument with another compute center about the best way to do something and they can both go off and do it differently. OpenHPC gives them the opportunity for the debate to happen and then maybe stick with one solution that both centers, or lots of centers, evolve.
I’m also really excited that we got three Knights Landing machines deployed outside of Intel. In 2016, we’ll see that unfold, and there is enormous anticipation over Knights Landing. I think it’s very well justified, because taking this scalable manycore design to a processor is going to be a remarkable transformation in the parallel computing field, with a very bright future ahead of it.
HPCwire: With regard to OpenHPC, do you expect that more wary associates like IBM, which has been pushing the OpenPOWER ecosystem so strongly, would also be a member? We talked with them and they said they were looking at it for pretty much the reasons you’ve outlined. What are your thoughts about membership?
Reinders: You know I can’t speak for IBM or predict what they are going to do, but I do think that the purposes of OpenHPC, the problems they’re solving, would definitely be beneficial to IBM and quite a few other companies and centers that haven’t joined yet. I think a lot of people learned about it at Supercomputing, so that’s not a surprise. The goal of OpenHPC is certainly to be a true open community group. So the Linux Foundation – it’s their thing, we are heavily involved obviously, and they need to come up with the governance models and so forth, but I can say that it would be an extreme disappointment if it wasn’t open enough that everybody felt welcome enough to come participate and benefit from it. So I certainly hope to see them participate in 2016, but I think that ball is in their court.
There’s been some dialogue or debate about whether OpenHPC is Intel’s answer to OpenPOWER, and I don’t think that is the right way to look at it. OpenHPC has the opportunity to bring the entire industry together as opposed to being partisan to one architecture or another. Now that said, our heavy involvement in getting OpenHPC started means that we did what we do best, which is our best effort at making sure that there are recipes already written up for our architecture, but hopefully they weren’t written in a way that prevents you from going and writing one for POWER or any other architecture. That wasn’t the goal, but frankly we’re not experts in other people’s architectures. So hopefully what we’ve done we’ve left open enough so someone else can come in and, if they want to invest effort for their own architecture, do so. It didn’t include the specification of a microprocessor in its design or anything, so it’s definitely different from OpenPOWER in that respect.
HPCwire: To recap, what are the top five things you are looking forward to in 2016?
Reinders: The top one to me is Knights Landing getting more available beyond the three systems that are out there. I think that’s going to be huge. Having a manycore processor instead of a coprocessor is going to fuel a lot of interesting results and debates, which I think will be great. I think OpenHPC is going to be very interesting, watching how that evolves. Nothing comes for free, so it’s going to be up to the folks in the community that show up to the table and contribute, but I think that will be very significant during the year.
The other two areas I look forward to seeing evolve this year are code modernization and big data. I like seeing how we can get better and better at explaining the benefits of parallel computing to a broader set of users and to the users that you already think are doing parallel programming. I think code modernization will continue to stay on the docket as a very important dialogue. And then I think that big data, including data analytics and machine learning, will continue to see very significant developments with more nitty gritty work going on. There have been a lot of demonstrated kernels and some interesting work done, but this year the ramp-up is going to continue very fast. The interest in big data and what it can do for companies is very significant and I think we’ll continue to see a lot of things pop out there.
The other thing I’m following closely is the shift of visualization to the CPU. We’ve had some really interesting work in that area. There’s kind of been an assumption that when you’re doing visualization, having a specialty piece of silicon or GPU to do the visualization must be the answer, but it turns out GPUs are focused on the sort of visualization you need to do to display on the screen, the rasterization. A lot of visualization work is going on on supercomputers and machines where there are a lot of benefits to not rushing the rasterization so quickly. Ray tracing in particular is seeing a lot of use there, and it is clearly much better on the CPU, including Knights Landing. It’s been interesting watching that surprise people. There are a lot of people in the know that are doing visualization on CPUs and finding much higher performance for their purposes. I’ll go ahead and add that to my list since you asked for five things. I think in 2016 there will be more aha’s and realizations that visualization is increasingly becoming a CPU problem.
HPCwire: What are your thoughts on the National Strategic Computing Initiative to coordinate national efforts to pursue exascale and maximize the benefits of HPC, and should there be more investment in national research centers for software?
Reinders: I do like to point out that hardware is meaningless without software, so yes, the software challenges are substantial. If I had any say in it, I would encourage us to worry more about the connection of software to the domain experts rather than approaching it in a pure computer science fashion. I think there is a lot of interesting work going on in that space. If you had such centers, I would think of them as being applied science, and I think that would be an area of applied science that would be very useful.
As for the National Strategic Computing Initiative, how can I not love it? I think the fate of nations rests on their ability to harness compute power. There’s no doubt about that. Whether or not we want to be so dramatic as to call it a battlefield, it is definitely an area of competition. As an American, I’m very glad to see my country not missing that point. I just got back from India, another large democracy, and they are having very similar discussions in their country and they are rolling out their initiatives. Every country has to consider the role that computing, especially high-performance computing, plays in the competition of their nation. The US has been a leader in this area for so long that I think our dialogue is about how to continue to lead the world through our own activities.
This was the second part of a two-part interview. To read the first half, where Reinders discusses the architectural trade-offs of Knights Landing’s manycore design and offers advice for expectant users, go here.