Jon «maddog» Hall: Exclusive interview for AT

This is another of the interviews I was most looking forward to doing, since it is with the great Jon «maddog» Hall, someone who needs no introduction and who is one of the best-known figures in the world of free software.

If you want to know a little more about Jon Hall, along with some scoops about BSD and interesting thoughts on security, RISC-V, the LSB, the LPI, and much more, I invite you to read the full interview…

Jon Hall

Architecnología: I always ask at the beginning: Who is Jon Hall? (Describe yourself, please)

Jon «maddog» Hall: White, male, bearded (since 1969), fat, old, technical, friendly (most of the time), quiet (except when I am not quiet), home loving (except that I travel about 50% of the time).

AT: When and how did you start being passionate about technology?

J.H.: When I was very young I had a retired next-door neighbor who repaired all sorts of things like jukeboxes, clocks, radios, wire recorders (predecessor to tape recorders). His collection amazed me.

I started reading magazines like Popular Science, Popular Mechanics, and eventually Popular Electronics. I helped my father assemble complex toys (he hated reading instructions) at a toy store.

Later I took three years of electronics shop in high school. We designed and built radio receivers and radio transmitters (analog electronics) and I went to Drexel Institute of Technology to study Electrical Engineering.

While I was at Drexel (now Drexel University) I started programming, mostly by reading books and practicing on my own. Over time I did more and more programming and was good at it. On the other hand my electrical engineering was not doing so good, so I switched majors (and my grades improved dramatically).

Another thing that happened at this time…I became familiar with DEC’s user group, DECUS, and how the users of DEC wrote programs and published them for the cost of copying (often onto paper tape). They freely wrote their code, freely discussed it, and freely shared their sources. I was introduced to “Free Software” way before the GNU project and the FSF, and throughout my professional life I always had access to the source code of the systems I used.

Like Richard Stallman, I liked looking at the source code to see how it worked, and I often fixed bugs in the code I received from other people.

I still collect and repair mechanical clocks, as well as automated musical instruments. Yes, I like tech of all types.

AT: Do you have a role model? Someone who has inspired you?

J.H.: I have many “personal heroes”. Alan Turing currently tops the list along with Rear Admiral Grace Murray Hopper. My mother and father (Mom&Pop™) also inspired me, although neither were very technical.

AT: And why “maddog”? Just kidding, haha. Don’t sigh! I won’t force you to tell it for the 1001st time. The question is: when and how was your first contact with Linux?

J.H.: My first contact with Linux was from a computer magazine called “Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia: Running lite without over-byte” (yes that WAS the name). It was about November of 1993, and in the back was an advertisement for “a complete Unix system with all the source code” for 99 USD.

I had been in the Unix community for thirteen years at that point, and I knew that AT&T would sue the pants off anyone selling Unix source code for that small amount of money, but… I sent off my money and received a CD-ROM with a small manual that told me how to install it on my Intel PC.

The only problem was that I had no Intel PC. I had DEC VAX systems, DEC MIPS systems, DEC Alpha systems….but no PC (even though DEC made PCs).

I did, however, mount the CD-ROM on my DEC Alpha system and look at the man(1) pages. I was impressed, but I simply put the book and CD-ROM into my filing cabinet.

In May of 1994 when I met Linus Torvalds at DECUS in New Orleans I realized that the thing in my filing cabinet was actually Linux, and when I returned I pulled it out again and looked.

It was the Yggdrasil distribution.

AT: You are Board Chair of the LPI. How would you convince readers of the importance of open-source certifications today?

J.H.: There are a couple of different ways of answering this question:

Are certifications important? I think so. When you go to school the teachers determine what you should learn (the curriculum), then they present the information to you, then they test you and eventually you end up with a diploma or certificate. That piece of paper tells the world that you learned a particular topic and were tested on it. You knew it “good enough”.

Whether you take a course or study on your own, a certification tells the world that you at least know the information well enough to pass the test. Of course you might also want to get letters of recommendation from your employer or customer.

Secondly, even if you do not take the test or get the certification, having the test objectives available online allows you to study on your own. If you read all the objectives and say “Yes, I know all of those,” you are probably OK. If you read one objective and have no idea what it is talking about, maybe you should study that objective some more.

Third, we have seen that, everything else being equal, people with a well-respected certification can make much more money than someone who is not certified.

AT: The Linux Foundation has its own certifications for system administrators. Does it make sense to keep the LF and LPI certifications separate, or could a merger be positive for both?

J.H.: I think having different certifications to choose from is a good thing. LPI also separates our certifications from training. This allows people to get trained any way they want, including self-study. We have partners that do the training for our Certs if people want to take the training, but they do not have to take the training.

AT: When will it be possible to take the exams from a Linux distro, and not only from Windows and macOS?

J.H.: Currently LPI’s main test delivery system around the world is VUE. We also offer paper exams at large events. We realize that having other ways of taking the tests is desirable.

Jon "maddog" Hall

AT: In 2013 you were involved in a project for porting to ARMv8. What do you think about RISC-V? Do you think it will become a new «Arm» and we will see it in all kinds of machines (PC, embedded, mobile devices, HPC,…)?

J.H.: I like ARM’s business model more than Intel’s, and I like the concept of RISC-V more than ARM’s business model, simply due to its “openness”. I also like the collaborative efforts around RISC-V, and how they are trying to learn from the last sixty years of computer black magic and make a cleaner, nicer system.

Yes, I think we will see RISC-V chips in all the places you mentioned, and even more. The “initiation fee” for getting into developing and using RISC-V is much lower than ARM, so I think we will see many more “makers” and researchers using RISC-V as a focal point.

AT: According to the LSB, packages should be distributed in RPM format; however, DEB packages are the most popular because of the success of the distros based on that format. Should a change be introduced considering what has happened? I mean that standards must also be flexible enough to adapt to current needs.

J.H.: I would like to see LSB have standards for DEB packages, but I also think the hardest part of creating a standard for GNU/Linux is not the package manager. By the time the developer gets the libraries, compilers and other things correct, the package management is a lot easier, IMHO.

However, the LSB is now under the control of the Linux Foundation, so you should ask them.

AT: Hot topics: environment and security. Linux dominates the super-computing sector, and data centers consume enormous amounts of energy. Do you think enough is being done on the software side to optimize and create code that helps manage resources more efficiently?

J.H.: I would ask that you separate the concept of super-computers (High Performance Computing) from server farms. They are two different types of computing and have different needs.

It is my belief that people should learn how the machine works down to the flip-flop, if not the transistor. They should learn some type of assembler language (RISC-V might be a good choice at this point), not to program in it, but to see, from time to time, what the compiler generates. If you code your sources one way, and it produces 50 RISC-V instructions, then you change your code slightly and it produces 5000 RISC-V instructions, perhaps you need to pay more attention to what you are doing.
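As a minimal sketch of that habit of “looking at what the compiler generates” (the file name, function names, and compiler invocations below are my own illustration, not anything maddog mentioned), you can ask GCC to emit assembly and compare two versions of the same routine:

```c
/* sum.c -- a hypothetical example of inspecting compiler output.
 * Emit assembly with:  gcc -O2 -S sum.c          (native)
 * or, if a RISC-V cross compiler is installed:
 *                      riscv64-linux-gnu-gcc -O2 -S sum.c
 * then open sum.s and compare what each function turned into. */
#include <stddef.h>

/* Straightforward indexed loop over an array of longs. */
long sum_indexed(const long *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Same result written with pointer arithmetic; diffing the two assembly
 * listings shows whether the compiler treats them differently at -O2. */
long sum_pointer(const long *a, size_t n) {
    long total = 0;
    for (const long *end = a + n; a != end; ++a)
        total += *a;
    return total;
}
```

If a small change to the source suddenly balloons the instruction count in the generated .s file, that is exactly the signal maddog is describing.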

It is not just whether Google is using 10 GW of power and by optimizing your code for 10% greater efficiency you can allow Google to use 9 GW of electricity, but it is also whether your smartphone lasts for ten hours on a charge or only 9 hours. Both are examples of making your code more efficient.

There is also code that uses way too much RAM, causing paging and swapping which can make other programs running on the same system less efficient. There are many issues of efficiency.

In 1977 I took a program that ran in 10.5 hours on a PDP-11/70 and made it run in only three minutes. The person who wrote the original program knew nothing about how the computer worked. I changed the program to take advantage of the architecture.

AT: And is enough being done about security for today’s new threats?

J.H.: There is never enough being done regarding security, but there is also the balance between making a system secure and making it easy to use. While certain things may be good for a commercial installation that has professional network and system administrators, “Mom&Pop™” may not have the technical skills to understand the problem or the knowledge to stay secure.

I also know more than the average person about the security holes in systems, and how they can be exploited, not only by “bad guys” but by governments. I still manage to sleep at night, but just barely.

AT: And there have been comments about the use of Rust in the kernel to address security issues. What do you think about that?

J.H.: I will let the kernel engineers decide what is best for them. I am thinking about learning Rust, but I would also like to learn Go.

AT: What do you think will be the next challenges Linux will face?

J.H.: I think the real challenge is to Free Software, not just Linux. Too many companies create products that are “Open Source”, which means that they use Open Source in their products, but do not pass on the changes they make to either the end user customers or the upstream developers. People hear that a company is using “Open Source” and in their minds they think “Free Software”, but there is a difference.

Companies using “Open Source” can create just as closed a product as people who are “up front” about creating closed source. But companies use the term “Open Source” because these days it is seen as a “good thing”.

I used to say “If you are only partially open, it is sometimes worse than being completely closed.” That is still true.

We have to fight to support companies using Free Software. We have to buy software and services from companies that support and ship Free Software.

AT: There have been a lot of comments about «the year of Linux on the desktop». But the truth is that Linux seems to be dominating everything except that sector. I’m not going to ask you the typical question about this, but I would like to know your opinion about the problems that GNU/Linux being the dominant desktop operating system would imply: do you think a wave of malware could arrive, as happens on Windows or Android?

J.H.: It used to be that Microsoft Windows was a lot easier to break into than Unix or GNU/Linux systems simply because the latter two were built to be multi-user systems on the Internet. Unix and Linux had to have stronger security than a single-user system which not only made them more secure, but more stable. That has changed over the years and I think you could make the case that a modern-day Microsoft System has a level of sophistication in security that rivals that of Unix and GNU/Linux systems.

There are also the arguments of “security through obscurity” and “many eyes see the bugs”, both of which are fairly (almost completely) bogus.

The real argument about security is three-fold:

  • What does the exploiter gain from exploiting a bug?
  • How many levels of systems does the exploiter have to go through? What is the level of difficulty?
  • How long does it take to patch a bug once it is known?

On the first one, if you are a bank, a government facility, or a large body of similar code (i.e. Microsoft Windows), you have a large body of users where one exploit gives a lot of payback. As GNU/Linux is used more and more, the attractiveness of the GNU/Linux target gets stronger.

On the second one, the diversity of GNU/Linux may help to protect it. My firewall is an ARM chip, not an Intel chip. Perhaps soon it will be RISC-V. I do not use a “standard” distribution of GNU/Linux for my firewall. All of these things make my system harder to find and to break into. If one GNU/Linux system uses systemd and another uses sysvinit it may become harder for a single exploit to work.

On the third item, the Mean Time To Fix (MTTF) is (to me) the most important difference: when a bug is found, how fast can the fix be applied, and to how many systems?

As an example, if a bug is found in Microsoft Windows you have to wait until an engineer determines a fix. Then they have to create an object code patch for each version of the operating system they support. Then they have to test it. Then they have to distribute it. Then you have to apply it. In the meantime your systems are exposed to the issue.

Perhaps you are still using an old version of the operating system, like Windows XP. Currently (October 2021) I am told that 0.6% of Microsoft users are still using Windows XP. That does not sound like a lot, but if there are 2 billion desktop PC users and 90% of them are on Microsoft, that is about 10,800,000 desktops that can be affected by that bug, with no patch coming to them. I am also told there is one country where about 60% of the desktops are still Windows XP.

It is possible that these old systems are behind a firewall, and therefore have some type of protection from attacks. Still, it is not good. And I know from a friend of mine who works for the US military that some government agencies in the USA are also still using Windows XP…

Compare that to what happens with GNU/Linux. A bug is found and the source code patch is made available. People can wait for the binary to be made and distributed, which usually happens in a day or two. I can apply the source code patch to my Intel systems, ARM systems, or any other architecture. If I have some older systems I can re-work the patch to fit into the source code for the older systems (if needed) even if those systems are officially deprecated.

This is not just me saying this. A number of years ago a PC magazine gave the award for “best support” to the Free Software Community three years in a row. Then the magazine stopped, because they knew that it would always be the Free Software Community that would win. How embarrassing for some of their advertisers…

AT: I have sometimes asked certain companies or developers: ‘Why is there no support for Linux?’ And their answer always refers to the variety of existing packages and distributions. Do you think that universal packages have not yet been able to solve this problem? Or do you think the problem is that they are not compensated for the effort, given the small share that Linux has on the desktop?

J.H.: When I was working for DEC I would always hear from developers about why they would not support our new operating system/hardware combinations. The developers would complain that we did not have the right tools, or that things were “different”, or any number of other reasons. This was all a deflection.

The real reason is volume, both the installed base and the rate of new system sales. If you have a huge number of existing systems or (better yet) the rapid sale of new systems (new systems want new applications), they will port and support you.

The other issue is “rate of sales of applications vs the installed base”. If the company thinks that half of the customers of the system will buy their application, then they will do the port. If only 1% of the platform’s customers will buy their application, they are not as anxious to do the port.

Unfortunately GNU/Linux people are not the greatest revenue streams for applications. They do not like paying a lot of money for applications, and this also is affected by the rate of software piracy around the world.

The last time I looked, Vietnam had a software piracy rate for desktop applications of 96%. China used to be 92%, but they have brought it down to 84%. Brazil had a software piracy rate of 84%, and even the United States (one of the richest countries in the world) had a piracy rate of 34%. I got these figures from the Business Software Alliance (BSA) that is supported by companies like Microsoft, Oracle, and Adobe, among others.

I actually hate software piracy. I buy all my commercial software, I obey licenses, I buy all my music (I have thousands of CDs).

When I first went to Brazil I said “You should use Free Software”, to which my Brazilian friends said “Oh, maddog! All of our software is free!” Thus, because they pirated commercial software, Free Software had less “value” for them.

Of course commercial companies and governments can not take the chance of pirating software, so even with a lot of desktop software being pirated, billions of dollars flow out of their country to pay for software where there is a good Free Software alternative solution.

In addition, gamers (unfortunately) had a rather high rate of “piracy”, although gaming revenues are changing from the game itself to services provided by a gaming server.

So if you put all of this together (a desktop installed base of about 7%, a desktop installation rate of 10%, and a user base that does not buy many commercial desktop products), you can see why application vendors do not do a lot of ports.

Servers are somewhat different. People are used to buying a server to run some large application.

People running High Performance Computers (formerly known as “Beowulf systems”) tend to write their own special-purpose software.

Embedded systems look for a portable, secure, internet ready, efficient, low-cost (free) solution.

Android needed a kernel that was portable, secure, efficient and open. They needed it in a hurry, since Apple was in front of them.

AT: Video games have been another of the problems that have kept many users away from the Linux platform. Nowadays this has changed radically, there are thousands of titles available. What do you think about the gaming scene on Linux?

J.H.: I am not a gamer. The last computer game I played was “Adventure” in 1973 on a PDP-8.

I am happy to see gaming improve on GNU/Linux, because I know a lot of people are gamers and they give “lack of games” as the reason they even bother with Windows at all.

Game consoles also affect this. A friend of mine has lots of game consoles, so does not bother to play games on his desktop or laptop systems.

maddog drinking a beer

AT: If you allow me some humor in the interview: I think they have tried to tempt you over to the FreeBSD side at some conference. Is that so? Why do you remain loyal to Linux (apart from not wanting to lose the title of ‘godfather to Torvalds’ daughters’, haha)? I mean, why Linux and not some *BSD?

J.H.: I first used AT&T System III Unix in 1980. I learned Unix on that at Bell Laboratories. I stayed at the labs until 1983 and used System IV and System 5.0.

When I went to DEC we based our systems on Version 7 (modified) for the PDP-11, but went with BSD 4.1c for our 32-bit VAX version. We stayed on that BSD base until we went with OSF/1 for the Alpha. So I am quite familiar with BSD, and know many of the fine developers that worked (and continue to work) on it.

However, in 1994, when I first met Linus Torvalds, the USL vs. BSDi lawsuit had not been finalized. BSD Lite had not been created, and I needed a free and open operating system to be ported to the 64-bit DEC Alpha, one with source code uninhibited by licensing. Why did I need this? I had VMS. I had DEC OSF/1.

I needed it for computer science research in large address spaces.

Most people do not realize how large 2 to the 64th power is. With a 32-bit computer you can map four billion bytes of data into virtual memory. With 64-bits you can map four billion TIMES four billion bytes of data. That is the equivalent of filling up a one gigabyte disk every second of the day, every day of the year, for the next 584 years.
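His arithmetic checks out; as a rough back-of-the-envelope check (taking the “one gigabyte disk” as 10^9 bytes):

$$
\frac{2^{64}\ \text{bytes}}{10^{9}\ \text{bytes/s}} \approx 1.84\times10^{10}\ \text{s},
\qquad
\frac{1.84\times10^{10}\ \text{s}}{3.15\times10^{7}\ \text{s/year}} \approx 584\ \text{years}.
$$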

And if you are working on a movie, or weather, or ocean currents, you may find that 32 bits is too small, but 34 bits is good enough.

When you are using a hash table, and your data is large but your memory is small, you may have lots of collisions, which take time to fix up. With 64 bits of address space the likelihood of a collision is small.
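As a rough illustration of why (the table size and item count here are my own assumptions, not numbers from the interview): with a uniform hash into m = 2^64 slots and n = 10^6 entries, the standard birthday approximation gives

$$
P(\text{collision}) \approx \frac{n^{2}}{2m} = \frac{\left(10^{6}\right)^{2}}{2\cdot 2^{64}} \approx 2.7\times10^{-8},
$$

whereas the same million entries hashed into a 2^20-slot table would collide constantly.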

Yes, researchers could do this type of research with licensed, restricted access to source code of a closed source system, but if the researcher wanted to collaborate, and if they wanted to publish their research with source code to demonstrate, they needed a Free Software kernel, libraries and utilities. This is what I was looking for.

So in May of 1994 I found this nice young man, with sandy brown hair who spoke perfect English with a lilting European accent standing in front of me, the leader of the project, and who agreed to do a port to the DEC Alpha.

When I got back to my office I found there were a group of engineers in Digital Semiconductor (where the Alpha was designed, manufactured and sold) that had (independent of me) done an evaluation of non-DEC operating systems trying to find another operating system to port to the Alpha. They had looked at BSD, and rejected it for the same reason I did….the license issue. They had even looked at SCO (the original “good SCO”) and rejected that, for the same reasons.

They finally selected Linux, and had even started doing a port, but with only a 32-bit address space. I told them that was crazy, that one of the main strengths of the Alpha was its 64-bit address space, and that they should join with the community to do that larger task. The engineers thought it would be too much work, since it was not only the kernel that had to be ported, but all the libraries, compilers, utilities, etc. had to be ported to the 64-bit space. I told them that work had already been done, because all of GNU, the X Window System, and many other utilities had already been ported to DEC OSF/1, and were already 64-bit clean.

They tested my statement and found out I was right. So they terminated their 32-bit port and joined the Alpha/Linux project. We became good friends and one of them, David Rusling, wrote the Free Software Boot Loader “milo”, which was like “lilo”, only for the Alpha. David went on to become a Fellow at ARM, and the CTO of Linaro, and remains a good friend to this day.

I could see a lot of excitement around creating distributions (SLS, Slackware, Yggdrasil, Red Hat, Debian and more) as well as “Linux” user groups (LUGs) springing up all over the world. I could see multiple magazines (Linux Journal, Linux Magazine, Linux Pro Magazine) on news stands. And much interest in Linux.

Eventually we had a cute penguin as our symbol. BSD had a devil. Yes, I know it is a “daemon”, but try telling that to people in Texas.

I would have beautiful women coming up to me at trade shows saying “PLEEEESE give me one of those cute plush Tuxes”. I never had anyone ask me for a BSD devil.

So when the BSD people (finally) came to me and asked me why I did not give as much help to them as I did to “Linux”, my answer was simple:

“Show me three magazines. Show me five LUGs. Show me another distribution other than the three main ones (NetBSD, FreeBSD, OpenBSD) that did not FIGHT WITH EACH OTHER, and I would give BSD as much support as I did with Linux.”

To be clear, I do not consider BSD to be “the enemy”. Closed source is the enemy. The customer not having control of their software is the enemy. The customer being told when to upgrade, how many users to put on their system, what processor to use is the enemy. Software Freedom is the answer. And this is the problem with “permissive” licenses such as BSD and MIT. They do not guarantee software freedom to the end user.

I know that many people say they need a “permissive” license to make their product, and therefore they need an “Open Source” license other than the GPL. They are simply wrong.

We talk about Software Freedom a lot. To a business person “freedom” is sometimes scary, so I do not talk to them about “freedom”. To a business person Software Freedom means “control of their software”. Anything less is Software Tyranny. And now we begin to see it in Hardware (Open Hardware) and Culture (Creative Commons).

Control. Control of your software, control of your business, control of your life.

And so it was much like my discussion of applications. I had been at DEC trying to produce and sell an operating system (Unix) and a computing life-style (Open Source) from 1983 to 1994 in a company that only wanted to sell VMS and closed source products. I had whip marks up and down my back. I was tired of running the gauntlet.

Just for once…just for once…I wanted to swim down the waterfall, and not up the waterfall.

AT: Thank you Jon for this interview and for the help you gave a few years ago to motivate the students of my Linux system administration course… ¡Hasta la vista!

Isaac

Passionate about computing and technology in general. Always trying to unlearn in order to learn anew.
