
At Citrix, Peter Lefkowitz is Watching Privacy Careers and AI Blossom

Kim Lamba

After 20+ years in the privacy space with companies like GE and Oracle, AI Visionary Peter Lefkowitz is now chief digital risk officer at Citrix. During his career, he says, he’s had the benefit of learning and growing with the corporate privacy space since its infancy. As a result, he’s witnessed a lot of evolution—and has expert insights on what’s around the corner.

You enrolled in law school at Harvard after studying history at Yale. What attracted you to the field of law?

I was the geeky kid in high school reading Louis Nizer’s My Life in Court in my spare time. I had a passion for the law, but without a clue what it would mean to practice. When I started out as a litigator and figured out that short stints in court were accompanied by months of fighting, I moved in-house and found my place helping create, package, and sell technology. I benefited early on from exceptional mentors, including Dan Cooperman, general counsel at Oracle and Apple; and Joe Alhadeff, Oracle’s first chief privacy officer and one of the great privacy strategists.

You started your career as an attorney and subsequently shifted your focus to privacy in the late nineties. Since then, you have helmed the privacy function at GE, GE Digital, and Oracle, among other companies. How has the privacy landscape changed in the last 20 years?

The privacy profession has gone through the equivalent of three major technology releases.

Release 1, in the 1990s, was the policy and public policy expert. The EU Directive had just been released; companies were figuring out how to translate the required controls into policies that could summarize and standardize their commitments; and privacy officers played a key role in talking with privacy regulators, principally in Europe, about how technology and privacy could work together.

Release 2, in the early 2000s, was the crisis responder. California, and then other states, required disclosure of data breaches, and privacy officers in the US fielded constant questions about data transfers to and from Europe, driven by concerns over US government, and later corporate, access to large amounts of European data.

And Release 3, where we are now, is the risk officer. Privacy pros today worry about maintaining appropriate security in products, services, and supply chains; avoiding zero-day attacks and malevolent nation-state actors; and understanding the limits on use of AI algorithms for everything from cybersecurity and product research to drug development, loan approvals, and advertising. Data and data use are becoming more valuable, the technology landscape is becoming more complex, and privacy pros are keeping pace by understanding and being able to sort and report on how data is collected, used, and protected.

I’m looking forward to Release 4: the privacy pro turned CEO or board member, a move already underway in the privacy technology space.

What role will AI play in this new privacy landscape?

AI has the potential to radically transform our lives in positive, privacy-protective ways. But it also has the potential for significant harm if not well managed. A few examples:

We can use machine learning to figure out that someone with access to sensitive information who normally comes in from a specific location tried to enter from thousands of miles away; from there, we can thwart an attacker.

We also can use natural language processing to find sensitive data and correct, delete, or better protect that information. And we can use AI to accelerate the development of drugs that save lives or detect diseases from haptic response.

At the same time, a misguided algorithm can discriminate by determining that free package deliveries should not be made to certain zip codes, and AI can be used to push people further toward ingesting ever more extreme social media content. A lot of work in the privacy field right now, particularly in emerging regulatory frameworks, centers on figuring out how to differentiate socially beneficial from other AI uses and what measures can be used to set those boundaries.

For several years now, there has been talk of a federal privacy regulation in the US—the American equivalent of the GDPR. What are your thoughts on the likelihood of a sweeping privacy regulation at the federal level?

A comprehensive US privacy law would provide tremendous benefit. It would set a baseline for permissible use and protection of personal information. It would give companies and regulators more certain guardrails. It would bring the US in line with roughly 100 other countries that have created their own privacy laws and facilitate cross-border transfers of valuable data and technology. And it would stop the madness of each state setting its own law to govern information that travels across state and national boundaries over the internet.

Likelihood? Low, currently, for a few reasons.

First, there are a lot of other issues on the legislative and regulatory agenda. Social media and Big Tech antitrust concerns are dominating the discussion in the US. And so, barring a major privacy event, we’re more likely to see legislators focus on things like competition law and Section 230 publication protections.

Second, cybersecurity is a more pressing issue, with zero-day attacks and other cyber risks to personal information stores becoming prominent national and global risks.

Finally, developing a national privacy law is a tough slog. It’s not just about private rights of action and preemption. Europe took years to develop the Data Protection Directive and many more to develop GDPR. We’re arguably 10 years into our journey following the US Department of Commerce’s excellent draft in 2012, but we still have a way to go in settling some core issues like what is a permissible use, whether consent is an appropriate construct for AI and other advanced uses of data, and more technical issues like use of partially deidentified data. I hold out hope for a comprehensive law, but my money is on the FTC and the states making the most significant progress in the near term.

Privacy is having a cultural moment, driven by heightened customer expectations, state privacy legislation under deliberation across the country, and the persistent threat of data breaches. Has corporate America recognized the growing importance of, and concern around, customer privacy? Has privacy become a boardroom priority?

Yes and yes. It’s a natural progression. As data becomes more valuable and more consequential to the bottom line, and as companies build their business models around collecting, using, and protecting personal information, executive management responds to that value and risk.

When I first joined the privacy field, we met with management periodically and with the board when there was a bad event. Over time, that evolved into regular meetings with audit committees. Today, Citrix, like many other companies, has a special committee of our board that focuses exclusively on technology, data, and data protection, with board members deeply versed in technology. And we are now part of business management, setting strategic product direction, which is exciting.

You’ve had an illustrious career. What do you consider your most meaningful wins and why?

One of my most meaningful wins was taking the plunge into privacy at a time when many were saying: “But why? You could have had such a good career!” I had the advantage of learning the field from the ground up, when there was just a small group of privacy professionals at companies, constantly helping one another and growing together.

Another was teaching privacy to law school students. A number of my students are now practicing privacy professionals, and their skills and enthusiasm are amazing.

What do you do when you are not working? How do you decompress?

During the COVID-19 pandemic, I’ve spent a lot of time with my wife, kids, and dog (a sweet, playful pitbull/lab mix, as we learned from AI-powered DNA testing). I enjoy photography, but need one of my sons, an accomplished photographer, to help me out.

Which historical figure do you most identify with?

Mel Brooks. More “aspire to” than “identify with.” He’s a lot funnier than I could ever be.

What do you consider the most underrated quality or skill?

Open-mindedness. Perhaps it’s the current political environment, but we all (me included) tend to come at issues with an eye to how the answer fits our world view. At a time of such rapid change, it is important to start discussions from a place of empathy.


Kim Lamba is a member of the marketing team at Relativity.
