
CIPS CONNECTIONS


Interviews by Stephen Ibaraki, FCIPS, I.S.P., MVP, DF/NPA, CNP

Sean W. Smith and John Marchesini, World-Renowned International Authorities/Authors in Security

This week, Stephen Ibaraki has an exclusive interview with Sean Smith and John Marchesini.

Professor Sean Smith has been working in information security - attacks and defenses, for industry and government - since before there was a Web. As a post-doc and staff member at Los Alamos National Laboratory, he performed security reviews, designs, analyses, and briefings for a wide variety of public-sector clients; at IBM T.J. Watson Research Center, he designed the security architecture for (and helped code and test) the IBM 4758 secure coprocessor, and then led the formal modeling and verification work that earned it the world's first FIPS 140-1 Level 4 security validation. In July 2000, Sean left IBM for Dartmouth, since he was convinced that the academic education and research environment is a better venue for changing the world. His current work, as PI of the Dartmouth PKI/Trust Lab, investigates how to build trustworthy systems in the real world. Sean was educated at Princeton (A.B., Math) and CMU (M.S., Ph.D., Computer Science), and is a member of Phi Beta Kappa and Sigma Xi.

Dr. John Marchesini received a B.S. in Computer Science from the University of Houston in 1999 and, after spending some time developing security software for BindView, headed to Dartmouth to pursue a Ph.D. There, he worked under Professor Sean Smith in the PKI/Trust lab designing, building, and breaking systems. John received his Ph.D. in Computer Science from Dartmouth in 2005 and returned to BindView, this time working in BindView's RAZOR security research group. He conducted numerous application penetration tests and worked closely with architects and developers to design and build secure systems. In 2006, BindView was acquired by Symantec and he became a member of Symantec's Product Security Group, where his role remained largely unchanged. John recently left Symantec and is now the Principal Security Architect at EminentWare LLC.

The latest blog on the interview can be found in the IT Managers Connection (IMC) forum, where you can provide your comments in an interactive dialogue.
http://blogs.technet.com/cdnitmanagers/

Index and links to Questions
Q1   Can you profile how you got to your present position in your career?
Q2   How did you come to collaborate on your recent book?
Q3   Sean and John profile some key lessons on several topics from their book.
Q4   What are five little known but essential tips that can be found in your book?
Q5   The industry is changing. What advice would you give to IT professionals to stay on top of what is happening in the industry in order to position themselves (from a career standpoint) and their organization to benefit from these trends?
Q6   What key lessons can you provide taken from your current research and work experience?
Q7   In your current role, what are the biggest challenges, and their solutions? How does this relate to business?
Q8   Please share some stories (something surprising, unexpected, amazing, or humorous) from your work.
Q9   Which are your top recommended resources and why?

DISCUSSION:

Opening Comment: You bring a lifetime of proven experience and accumulated valuable insights to our audience. Considering your impossible schedule(s), we thank you for doing this interview with us.

A: You're welcome!

Q1: Can you profile how you got to your present position in your career?

A:   I've always liked to put things together the wrong way; specializing in security was inevitable. A longer answer: I got into computer science in the 1980s because it was still a relatively new field. After working out in the real world trying to fit social processes and technology and security together, I came back to academia as an old young professor. Having to teach and work with bright young students takes the rust off.

[John]: Much like Sean, I was one who always put my Legos together in some way other than what the instructions dictated. Plus, I saw WarGames in the movie theater as a kid, and I was hooked. My career has mostly consisted of switching back and forth between security software development and penetration testing. I think that being able to build things as well as break them has made me better at both.

Q2: How did you come to collaborate on your recent book?

A:   At Dartmouth, I had a chance to create a security course, which, for most students, will be their only exposure to the topic. I didn't like the choice of books out there, so I decided to write my own. John was my first Ph.D. student and helped with the course, and his black-hat and industry experience nicely complemented mine - hence, the collaboration.

[John]: Yep, what he said.

Q3: Can you profile the key lessons from your book on each of the topics below?

  • Understand the classic Orange Book approach to security, and its limitations
  • Use operating system security tools and structures - with examples from Windows, Linux, BSD, and Solaris
  • Learn how networking, the Web, and wireless technologies affect security
  • Identify software security defects, from buffer overflows to development process flaws
  • Understand cryptographic primitives and their use in secure systems
  • Use best practice techniques for authenticating people and computer systems in diverse settings
  • Use validation, standards, and testing to enhance confidence in a system's security
  • Discover the security, privacy, and trust issues arising from desktop productivity tools
  • Understand digital rights management, watermarking, information hiding, and policy expression
  • Learn principles of human-computer interaction (HCI) design for improved security
  • Understand the potential of emerging work in hardware-based security and trusted computing

A:   That's quite a list!

  • The Orange Book: smart people spent a long time thinking about how to construct operating systems that meet some specific security goals, with high assurance. There's a lot a modern architect can learn from this material, even if goals and technology have changed. Of course, there's a negative lesson too: as a public policy tool, it failed to produce the desired response in the marketplace.

  • Computer systems have become too complex to think clearly about. As a consequence, it's easy to get them wrong - and what's worse, it's hard to even specify exactly what the "right" behavior was supposed to be. These problems lead to many security holes in software. It's like thinking up puns or putting things together the wrong way: the adversary provides input that complies with the basic rules of sense, so it is accepted by the system, but also has completely subversive and unexpected semantics, so it tricks the system into entering a dangerous state unexpected by the designers (see the sketch after this list).

  • Cryptography makes it possible to do all sorts of magic, such as hiding information from an adversary who doesn't know a secret, or convincing a remote party you know a secret without revealing what the secret is. Consequently, it's central to security in modern, networked computing environments - so one needs to understand both its foundations and its practical aspects, if one is to understand security.

  • For many of us, computers permeate every part of our daily lives. (Case in point: you sent these questions as a Word document, with macros!) As a result, security issues can affect our daily lives. For non-specialists, discussion of these topics can seem too abstract to be relevant - which is why it's fun (and enlightening) to discuss them instead with explicit examples and war stories from everyday office tools.

  • Computation doesn't make much sense without humans; even an embedded system out in a remote power substation requires a human to configure it, design it, and remember it exists! Thinking about how humans interact with computers, and how this interaction can complicate or simplify the security problem, is a promising area.

  • Since computation happens on computers, it only stands to reason that changes to the computer hardware affect properties of the computation - including how hard or how easy it is to secure. Researchers have speculated about this for a long time; however, in recent years, we're seeing vendors roll out a lot of new ideas here.

  • OS tools - The line that separates the OS from applications is constantly blurring (just ask the US Dept. of Justice or Microsoft). The more that distinction goes away, the easier it is for security problems in applications to cause real security trouble for a modern OS.

  • Networking, Web, wireless - Funny things happen when you let computers talk to each other. For years, the network has been the battlefield on which the security war has been waged: remote exploits, firewalls, intrusion detection systems, etc. As some new networking technologies become more prevalent, we expect to see new types of attacks and defenses.

  • Auth - Authentication typically serves as the foundation for a security system. Complicating matters is the fact that trying to build a good authentication system is hard. We hope that by examining what's out there, those trying to build such systems will learn what has worked and what hasn't and apply that to their own designs. (A pet peeve of mine is that too often, authentication techniques seem to be designed to meet the requirements of the technology, not the human system they're supposed to serve. But that's a topic for research.)

  • Validation - Let's say you've built some sort of secure system and you're ready to market it, give it away, or whatever. How do you know it's really "secure"? Testing and validation are a good way to systematically check for security trouble and to communicate the results to others.

  • DRM - The more that computers are used to translate data into something we like to read, watch, listen to, or use, the more, it seems, the creators of that data would like to control how and when it is used. Digital rights management and related fields theoretically give content producers a way to protect their data from "inappropriate use" by content consumers. Some of these schemes work technically but not socially, and vice versa.
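
To make the "subversive semantics" point above concrete, here is a minimal Python sketch (our own illustration, not an example from the book): a hypothetical file-serving routine whose input check is satisfied by a "../" path that nonetheless walks out of the intended directory.

    import os

    BASE_DIR = "/srv/reports"

    def serve_report_naive(filename):
        # Surface rule: the request "looks like" a report, so accept it.
        if filename.endswith(".txt"):
            return os.path.join(BASE_DIR, filename)
        raise ValueError("only .txt reports may be requested")

    def serve_report_safer(filename):
        # Canonicalize first, then verify the result stays under BASE_DIR.
        path = os.path.realpath(os.path.join(BASE_DIR, filename))
        if not path.startswith(BASE_DIR + os.sep):
            raise ValueError("path escapes the report directory")
        return path

    # "../../home/alice/private.txt" complies with the naive rule (it ends
    # in .txt), but its semantics are subversive: it resolves outside BASE_DIR.
    print(serve_report_naive("../../home/alice/private.txt"))  # accepted - trouble

The naive version accepts input that obeys the letter of the rule while violating its intent - exactly the pun-like mismatch described above.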

Q4: What are five little known but essential tips that can be found in your book?

A:   A few that come to mind:

  • Formal methods tools, such as model checkers, can go a long way toward slaying the dragon of software complexity and the security bugs that come with it. And we're on the cusp of seeing them ready for prime time.

  • In practice, public-key cryptography almost never relies solely on the public-key algorithms - and the security trouble usually lies in this other stuff (see the sketch after this list).

  • Side-channel attacks are a wonderland.

  • [John]: Modern software is usually built by smart people trying to do the right thing. So, why does it often go so horribly wrong? At least part of the answer lies in the toolkit and processes that the industry uses to get software to market.

  • [John]: Much security trouble stems from the fact that systems are often difficult to use and encourage users to make honest mistakes that put the system in a bad state.
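
As a concrete illustration of the public-key point above (our own sketch, not an example from the book): in practice, the public-key algorithm typically just wraps a fresh symmetric session key, and much of the "other stuff" - padding choice, nonce handling, key storage - is where deployments go wrong. A minimal hybrid-encryption sketch in Python, assuming the third-party cryptography package is installed:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The "public-key algorithm" part: an RSA key pair.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # The "other stuff": a fresh symmetric key and nonce do the real work.
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)  # reusing a nonce under the same key is catastrophic
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)

    # RSA only wraps the session key - and even here, the padding scheme
    # (OAEP vs. the older PKCS#1 v1.5) is where historical attacks have lived.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Receiver side: unwrap the session key, then decrypt the payload.
    recovered = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"the actual message"

Note how little of this code is "RSA" and how much is key generation, padding, and nonce discipline - the other stuff.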

Q5: The industry is changing. What advice would you give to IT professionals to stay on top of what is happening in the industry in order to position themselves (from a career standpoint) and their organization to benefit from these trends?

A:   We're going to see a fundamental change in the underlying hardware foundations: increased reliance on hardware-supported virtualization, increased permeation of TPM-like hardware, more CPUs (thanks to multi-core) than you can shake a stick at. These trends may turn conventional wisdom upside down.

Boundaries are disappearing. With outsourcing, corporate acquisitions, and regular job-changing, well-defined notions of "inside" and "outside" and "perimeter" are disappearing. I might even go further out on a limb and predict that, with things like Web 2.0 and Web-based "office tools" and MySpace and Facebook, the well-defined notions of "application" and "data" are also disappearing.

Computers are getting smaller and going everywhere. It's cliché now, but I don't think it will be long before my refrigerator sends me a text message telling me I need to get beer and the GPS in my car automatically calculates the route to the nearest market. While this is great from a convenience standpoint, it surely poses a host of new security challenges.

Regulatory compliance is driving much of the current security spending, at least in the US. Some of these regulations and best practice frameworks are vague and require interpretation to become useful to IT staff; others are more specific. In any case, some knowledge of these regulations is useful to anyone going into IT security or planning to develop software for use in a regulated environment (e.g., at a publicly traded company).

Q6: What key lessons can you provide taken from your current research and work experience?

A:

  1. Never be afraid of crossing expertise boundaries when pursuing the truth. The system is a system; you can't pay attention to the signs that say "you must be THIS qualified to go past this boundary."

  2. Wisdom is where you find it. In computer science (in general) and computer security (in particular), we see a balkanization: academic disciplines, academic vs. industry, formal vs. hacker, offense vs. defense. But if you confine yourself to one silo, you needlessly cripple yourself.

  3. Curiosity is key. One of the most important lessons I've taken from my research and work experience is that not having answers is OK; not having questions is the problem. I think it's important to constantly challenge the assumptions of the system and ask "what happens if…." Remember that it's much harder to build something than to break it. While these are complementary skill sets in my opinion, it's easy to find a security problem and criticize the system designers. An attacker can afford to hit and miss; each time a designer misses, security trouble follows.

  4. Security is relative. Anyone advertising a "secure system" surely has some implicit assumptions about what type of attacks the system can withstand. Saying that a system is secure in an absolute sense is just about meaningless, and you'd have a hard time proving it.

Q7: In your current role, what are the biggest challenges, and their solutions? How does this relate to business?

A:

Challenge one: Computers start at the bottom with things like transistors and logic gates, and build up all the way to social processes and psychology and legal issues. It's important to be able to move between these abstraction levels; if you're working primarily on one, it's important to be able to project its implications above and below. However, in my life as a professor (my third career now), it's getting more common to see students who have trouble with this projection, with things like relating their high-level program to machine behavior. Their challenge becomes my challenge!

Solution: I'm still working on this!
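
One cheap way to practice moving between levels (a classroom-style illustration of ours, not a specific Dartmouth exercise): Python's standard dis module shows the stack-machine bytecode the interpreter actually executes for a one-line, high-level function.

    import dis

    def average(xs):
        return sum(xs) / len(xs)

    # Prints the loads, calls, and binary-divide instructions hiding
    # beneath the abstraction - the "machine behavior" of the program.
    dis.dis(average)

Seeing the instruction stream, even at the bytecode level, is a first step toward projecting a program's behavior downward.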

Challenge two: Building a software product typically involves getting the most features to the most customers in the shortest amount of time possible. The way we develop software is organized around this paradigm. However, this approach is in direct opposition to building secure software. If security were the most important thing on the list, we'd axe features that were risky, spend more time in design and code reviews, and do much more negative testing (a sketch of which follows this answer). My job often involves trying to balance these two approaches.

Solution: Well, there isn't any hard and fast rule here; each project is different. Ultimately, it often boils down to a risk management decision.
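
To make the negative-testing point concrete, a minimal sketch (with a hypothetical parse_port function of ours, not an example from the book): negative tests assert that malformed or hostile input is rejected, not merely that the happy path works.

    import unittest

    def parse_port(value):
        port = int(value)  # raises ValueError on non-numeric input
        if not 0 < port < 65536:
            raise ValueError("port out of range: %d" % port)
        return port

    class PortTests(unittest.TestCase):
        def test_valid_port(self):            # positive: the happy path
            self.assertEqual(parse_port("8080"), 8080)

        def test_rejects_non_numeric(self):   # negative: injection-ish input
            with self.assertRaises(ValueError):
                parse_port("80; rm -rf /")

        def test_rejects_out_of_range(self):  # negative: boundary abuse
            with self.assertRaises(ValueError):
                parse_port("70000")

    if __name__ == "__main__":
        unittest.main()

Feature-driven schedules tend to fund the first test and skip the last two; attackers will happily run the last two for you.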

Q8: Please share some stories from your work.

A:   Boundaries are where the trouble is. In my industry work, whenever we drew a boundary between the "hardware functionality" and the "software functionality" it was wrong - we ended up wrestling with subtle interactions that did not pay attention to our naïve distinction.

As a student, when your code doesn't work, it's easy to blame the system and the tools instead of yourself. However, in the real world, I was surprised to see that, now and then, it really was the system. The tools themselves didn't work!

In my consulting days, I would often approach a project with a clear idea of where the security issues were - only to discover that, once I started talking to the clients and learning the broader context, my initial view was wrong (or at least overly simplistic). In these engagements, I learned a lot (on a meta level) from a retired Treasury agent who was on our team - and who was probably the smartest "people" person I've ever known. That helped make it easier to see the client's context. School doesn't spend enough time on that sort of thing.

Q9: Which are your top recommended resources and why?

A:

  1. Slashdot: a good way to keep up with the latest applied systems work (and related legal and public policy developments)

  2. The RISKS Digest: it's good to be continually reminded of broader, real-world consequences of misapplied information technology.

  3. The crypto mailing list: some noise, but a good way to stay abreast of applied cryptography developments - and some good discussion and insights.

  4. The BugTraq list: people break systems every day, often using old tools and techniques, and this is a good place to watch it happen.

Closing Comment: We will continue to follow your significant contributions. We thank you for sharing your time, wisdom, and accumulated deep insights with us.