The Flawed Economic Analysis Underpinning SEC Rule Proposals

One-on-One Conversation with Joe Seidel and Craig Lewis

In this episode of The SIFMA Podcast, SIFMA’s Chief Operating Officer Joseph Seidel sits down with Craig Lewis, former Chief Economist and Director of the Division of Economic and Risk Analysis, or DERA, at the SEC, to discuss the SEC’s approach to economic analysis as the basis for rule proposals.

Transcript

Edited for clarity

Joe Seidel: Hello, thanks for joining us for this episode in SIFMA’s podcast series. I’m Joe Seidel, SIFMA Chief Operating Officer, and I’m joined today by Craig Lewis, Madison S. Wigginton Professor of Finance Emeritus at the Owen Graduate School of Management at Vanderbilt University. Craig is also a former Chief Economist and Director of the Division of Economic and Risk Analysis (DERA) at the U.S. Securities and Exchange Commission (SEC), and joins us today to talk about the SEC’s approach to economic analysis as the basis for rule proposals.

Craig, thanks for being here today. To kick off our discussion, can you talk a little about DERA’s role in conducting economic analysis at the SEC? What are the guidelines and the goals of the division?

Craig Lewis: Great, and thanks for inviting me to participate in this podcast. The role of economic analysis, and what DERA’s approach to economic analysis should be, is one part of DERA’s role. So maybe I’ll start by talking a little about DERA generally and then jump specifically to the economic analysis.

One function the economists at the SEC have is to construct the economic analyses that accompany rulemakings; think of that as the regulation and disclosure aspect of the Commission’s mandate. The other thing the division does is support enforcement, either by building risk assessment tools that help enforcement or by participating directly in litigation efforts by the Division of Enforcement. We’re going to focus on the first function today, which is economic analysis in rulemaking. When I think about the role of the economists in rulemaking, the approach should be an objective analysis designed to inform the Commission about what the costs and benefits of the rule actually are. The way you do that is you begin by identifying why there’s a need for regulation. If you think there’s a need for regulation, we typically call that a market failure. What is the specific market failure the rule is trying to address? Once a market failure is identified, the next step of an economic analysis is to build the baseline. The baseline is designed to be the Commission’s view of where markets are today, pre-regulation. That should include not just a discussion of what is going on in the market but, to the extent possible, support for that view from a data-analytic approach.

Part of that analytic approach should be to provide empirical evidence that there actually is a market failure. Once the market failure has been identified and the baseline has been established, the next step is what most people think of when they hear the term economic analysis: the Commission lays out what it believes the costs of the rule would be, as well as the benefits that would be derived if the rule is finalized. That discussion is designed to take place first at a qualitative level, with a complete description of what you think the costs and benefits are, and then, to the extent you can, you try to quantify those costs and benefits. The real objective is to quantify as much as you possibly can, but for many of the Commission’s rules that’s a very challenging task.

There is a document that was developed while I was Chief Economist, called the Guidance, and it’s published on the SEC’s website. It is the roadmap, or blueprint if you will, for how economic analysis is supposed to be conducted. The Guidance says that when you’re unable to quantify something, you are expected to explain why you’re unable to quantify it, and to be specific. I think the Commission sometimes tries to get away with just saying it can’t quantify something, relying on boilerplate without being specific to the particular rule, but maybe we can dig into that a little later. Then, once those benefits and costs are identified, the weighing typically isn’t so obvious, right? If you’ve done a good job laying out the costs and the benefits, each commissioner gets to put them on their own personal scale and decide whether they believe the benefits outweigh the costs, and if you can justify that, then you would vote for a particular rule proposal.

The last part of an economic analysis is to identify reasonable alternatives, right? And then explain why you didn’t actually propose one of those reasonable alternatives. The idea is you would go through all the possible alternative ways to achieve a certain regulatory outcome and then ask, which one of these do we believe is the best? We’ll pick that one. So the reasonable-alternatives section is designed to be the Commission showing its work: we did think about alternatives, these are the ones we considered, and this is why we didn’t take them. So that’s how I would answer the question, Joe.

Joe Seidel: So Craig, you were the first economist to head DERA, and the guidelines you established have been in place since that time and have generally been interpreted in the manner you suggested. Why do you think the Commission, in this era, seems to be playing a little fast and loose with some of the items on that checklist, and how do you think that affects the quality of the rulemaking and the support for it? Obviously, they’ve had a little bit of trouble in litigation. What do you think it ultimately means for the quality and sustainability of their end product?

Craig Lewis: Great. I think the most significant issue I have with a lot of the rules the SEC has proposed during Chair Gary Gensler’s leadership is that they have, I believe, failed to articulate an actual market failure. They seem in some sense to be more politically motivated than economically motivated. So, what is the problem you’re trying to solve? Frequently, I think the explanations the Commission provides are pure conjecture. They fail to demonstrate empirically that there actually is a market failure. And if you believe there’s a problem in markets, it should be fairly simple to demonstrate at a data-driven, analytic level: what’s the problem, who’s being harmed, and why would this particular rule address that problem? To me it really comes down to those two things, the lack of a clearly identified market failure or problem you’re trying to solve, and then the inability to actually support it empirically.

Joe Seidel: Yeah, it seems like traditionally it’s more of a bottom-up approach: a market failure or market problem develops, and the economic analysis is part of the case for the rule as it goes up to the Commission. It seems we may be in a bit of an era of a top-down approach, where problems are specified almost as hypotheses and then the support needs to be generated. In that sense it seems we’re at a bit of a critical crossroads regarding the execution of a robust economic framework. Can you share your thoughts on that?

Craig Lewis: So, yes, I tend to agree with you. Maybe jumping a little bit ahead, I know I’ve said that the Commission seems to be pursuing regulation for its own sake. I think I’ve been quoted as saying that Chair Gensler seems to be less concerned about having rules vacated than his predecessors were. When I was at the Commission, Chair Schapiro, Chair Walter, and Chair White all felt it was important that if a rule was finalized, it wouldn’t be successfully challenged. A lot of that was, I think, the aftermath of having some rules vacated in the D.C. Circuit. Proxy Access was the one most relevant to me because it was vacated about a month into my tenure as Chief Economist at the SEC. And honestly, that was the catalyst for the development of the Guidance on economic analysis. Say what you will, but it was a legitimate, serious effort to say: if our rules are going to be vacated because the economic analysis was deemed arbitrary and capricious, that’s something we should try to solve internally. The Guidance is, I think, that effort. And if you follow that guidance rigorously, you should have a fairly robust rulemaking. I believe all three chairs I served under believed that, and that gave me the opportunity to provide what I felt were very objective analyses.

Like I said before, I just think you should be confronted with the costs of a rule. Just because you don’t like the outcome doesn’t mean you shouldn’t at least be exposed to those costs and form your opinion with complete information. Where I think some of this has gone astray is in the top-down approach you’re talking about. It appears that the economic analysis has taken on more of a supportive role than an objective-assessment role, and I believe that when you start to draft your economic analysis to support a rulemaking, you’re not applying the Guidance anymore.

Joe Seidel: So let’s dig into some specific rulemakings. In a recent guest blog post you did for SIFMA, you focused on the economic analysis in the SEC’s Regulation Best Execution, and you concluded the SEC had not adequately justified the need for new regulation. In broad strokes, what did they miss in their analysis, in your view?

Craig Lewis: In broad strokes, what I think they did was, once again, fail to articulate the problem. What is it you’re trying to solve? Why do we need this regulation? That is something I keep coming back to: I believe they have not really demonstrated a true market failure. From my perspective, the market for retail investing is better than it’s ever been, so applying a lot of rules to a space where markets seem to be working quite well doesn’t seem to be an efficient way to allocate your resources if you’re at the Commission.

There are plenty of other things to worry about; picking on a regulatory space where things already seem to be functioning well doesn’t make a lot of sense to me. So it’s that problem of identifying a market failure. What are you trying to solve? Show us why it’s a problem. I know that’s a little repetitive of what I said before, but I do think that is what the SEC has missed. And there are analyses that could have been performed that would have been much more convincing than the ones they actually provided.

Joe Seidel: So you focused on two key analyses in your blog: the midpoint liquidity analysis and the execution quality analysis. Let’s take a look at those separately. What did they seek to do and how, in your view, were they flawed?

Craig Lewis: Let’s go back and think about the way the market works today. Retail order flow has now largely been routed away from exchanges to wholesalers. One of the reasons that works is that retail investors, as a group, tend to be less informed than sophisticated institutional traders. Wholesalers understand that that order flow is less toxic, and because of its lack of toxicity, they’re willing to provide price improvement. Because they’re not on an exchange, they can offer better prices that are inside the bid-ask spread, and they can quote in increments of less than a penny. The Commission’s point is that, if we’re going to accommodate this type of price improvement, the level of price improvement is insufficient, that retail investors could and should be receiving more price improvement on their orders. That’s the underlying goal, I think, for these rules.

With the midpoint liquidity analysis, what they’re basically saying is, if you were to ask what is the most efficient price any investor should expect to receive, it would be the midpoint between the bid and the ask. But if a broker executed every single trade at the midpoint, there would be no room for the broker to make a profit, and let’s face it, market making is a for-profit business, right? Still, there are a lot of times when a broker is willing to offer midpoint liquidity. In fact, the midpoint analysis the SEC conducted shows that 45% of all orders are actually executed at the midpoint or better. So what they’re essentially saying is, we think it should be more. The problem I have with this is, well, what is the optimal rate? 45% seems like a pretty big number to me. What is the number you think they should be providing? It can’t be 100%. But there’s no guidance on what the desired outcome is or how much improvement they think should be achieved. The same thing is largely true for the appropriate level of price improvement. It’s one thing to say you think it’s inadequate; the SEC should maybe be a little more direct and say, we have price improvement levels of this much in mind, so that market makers can decide whether that’s a level at which they believe they can stay in business and be profitable.
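As an illustration of the statistic Craig cites, here is a minimal sketch of how a midpoint-or-better share could be computed from execution records joined to the prevailing NBBO. The data frame and its column names are hypothetical, and this is not the SEC’s actual methodology.

```python
import pandas as pd

def midpoint_or_better_share(trades: pd.DataFrame) -> float:
    """Fraction of executions at or inside the NBBO midpoint.

    Hypothetical columns:
      price    - execution price
      bid, ask - prevailing NBBO at execution time
      side     - 'B' for a customer buy, 'S' for a customer sell
    """
    mid = (trades["bid"] + trades["ask"]) / 2
    buys = trades["side"].eq("B")
    # A buy is at the midpoint or better if it pays no more than the mid;
    # a sell is at the midpoint or better if it receives no less than the mid.
    at_or_better = (buys & (trades["price"] <= mid)) | (~buys & (trades["price"] >= mid))
    return at_or_better.mean()
```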

So that’s one problem I have with the midpoint analysis. The bigger limitation, beyond the fact that the levels of midpoint executions are already fairly high, is what they don’t control for. They go through an analysis using consolidated audit trail data, which we call CAT, and they identify situations where a wholesaler executes away from the midpoint when there was actually midpoint liquidity available, either on an exchange as a hidden order or at an ATS. The problem I have with this analysis is that many of these hidden orders actually have quantity constraints associated with them, and they don’t try to control for the possibility that the wholesaler actually tried to tap into that liquidity but couldn’t, because the orders it was executing were too small. If you want to do this analysis, it seems you need to control for the specific features of those individual orders. They’re using CAT data, so you might think they could do something with that.
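Craig’s point about order-level constraints can be sketched as a simple check: before counting a non-midpoint execution as a missed midpoint opportunity, verify that the resting hidden or ATS interest could actually have filled an order of that size. The types and field names below are hypothetical; this is a sketch of the control he describes, not a reconstruction of the SEC’s CAT analysis.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class HiddenMidpointOrder:
    shares_available: int   # resting size at the midpoint
    min_qty: int            # minimum executable quantity constraint

def was_midpoint_accessible(retail_shares: int,
                            resting: Iterable[HiddenMidpointOrder]) -> bool:
    """True only if some resting midpoint order could have filled this
    retail order given its size and its minimum-quantity constraint."""
    return any(
        o.shares_available >= retail_shares and retail_shares >= o.min_qty
        for o in resting
    )

# A 'missed' midpoint execution would then be counted only when midpoint
# liquidity was genuinely accessible to an order of that size, e.g.:
# missed = sum(1 for t in trades
#              if t.away_from_midpoint
#              and was_midpoint_accessible(t.shares, t.resting_midpoint_orders))
```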

Joe Seidel: Yeah, so a couple of things there. One, and maybe it’s a bit of a tangent, but on CAT data itself: one of the problems with all of this from an industry point of view is the availability, or lack of availability, of CAT data. I’d be curious about your views on the need, in a rulemaking, to have data freely available. I think the court cases in various APA challenges to rules say quite clearly that it’s very important for an agency to make data publicly available. As a former data maven at the SEC, what are your thoughts on that?

Craig Lewis: So the CAT was something that was being proposed and put together while I was Chief Economist; it actually didn’t go live until after I came back to Vanderbilt. One of the things that is important here is that if you’re going to use CAT data to build your economic analyses, there needs to be a show-your-work component to that. If you’re not going to make CAT data available, it’s hard for commenters to actually render a view on whether the analysis the SEC conducted is accurate and complete. Given some of the problems I’ve pointed out with some of their other analyses, there is reason to want to be able to replicate the SEC’s work.

So how could you do that? Maybe you don’t want to give that data to everybody in the market, right? I’m sure there are a lot of market participants that really don’t want somebody in a position to reverse-engineer alpha strategies from that data. So are there ways to anonymize the data, to provide just the data you need to replicate the analysis? There’s a huge academic community out there. Could a commenter enter into an agreement with the SEC under which an independent third-party academic signs an NDA and simply engages in the process of replicating the analysis? You would give that individual access to the CAT data, they could go in, replicate the analysis, and say, yes, I’ve looked at this and they did a good job. I think that would be one way to get around some of the concerns about what you would reveal if you made the data available to everybody. But I do think there are ways to allow the SEC to show its work. It wouldn’t be an economist the SEC would pick; it would have to be an economist that someone else would identify.

Joe Seidel: In terms of the analysis they did, is there anything you found that suggests a new regulation is going to be superior to the best execution rule that already exists? Whatever mathematical exercise they did, I won’t say “so what,” I mean, it is what it is. But in terms of the current rule versus the proposed rule, you know, why?

Craig Lewis: Yeah, that’s what I mean when I say they haven’t demonstrated a market failure. There already is a best execution requirement. FINRA and the MSRB are both tasked with monitoring best execution. Somehow, I believe you need to demonstrate that FINRA is not doing a good job. They haven’t done that, and they haven’t made an attempt to do that. FINRA seems to be functioning as intended. Full disclosure, I’m on the economic advisory board for FINRA, but I do believe FINRA does a fine job of monitoring best execution. So what can you say at the end of the day? Show me the problem, and that is where you would show it. What is it that FINRA does that you don’t like? And if you have a problem with FINRA, why don’t you use your ability to tell them to do things differently? I believe you do have that sort of authority.

Joe Seidel: Indeed. So another piece of it, apart from the midpoint liquidity analysis, was the execution quality analysis they presented. What are your thoughts on that?

Craig Lewis: One of the things I noticed going through these rulemakings is that the way the SEC justified the best execution rule, as well as the order competition rule, was essentially to say, look, it appears to us that investors do better on exchanges than they do at wholesalers. The way they measure this is with an analysis of realized spreads. They show statistics derived from 605 reports, which are public reports that provide this type of execution-quality information. They show that across all the orders executed at wholesalers, the realized spread is about 61 basis points, and they argue that that is a measure of wholesaler profitability; one way to think about the realized spread is that it’s how much money the wholesaler is making. If you look at the realized spreads reported in 605 reports for orders executed on exchanges, across all marketable orders, that is, market orders and marketable limit orders, the realized spread is minus 38 basis points. So they take that difference and say it’s potential price improvement, that somehow wholesalers are making a lot more money on the average trade than they would earn if those orders went to the exchanges themselves.

First of all, I have a problem with using realized spreads here, because for 605 reports these realized spreads are estimated at a five-minute horizon, and at that horizon, to me, the measure is more about adverse selection than about dealer profitability. That’s my first problem. The SEC understands that, so what they do is say, let’s use the CAT data and shrink the time interval from five minutes to one minute. The problem I have with these analyses is that the nature of the orders executed by wholesalers is fundamentally different from what gets routed to an exchange.
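For readers unfamiliar with the metric, here is a minimal sketch of the standard realized-spread calculation and where the evaluation horizon enters. The column names are hypothetical; the only facts taken from the discussion are that Rule 605 reports use a five-minute mark and that the CAT-based analysis shortens it to one minute.

```python
import pandas as pd

def realized_spread_bps(trades: pd.DataFrame) -> pd.Series:
    """Per-trade realized spread in basis points.

    Hypothetical columns:
      price      - execution price
      mid        - NBBO midpoint at execution time
      mid_future - NBBO midpoint measured some horizon after the trade
                   (five minutes for Rule 605; one minute in the CAT-based
                   analysis described above)
      dir        - +1 for customer buys, -1 for customer sells

    The realized spread nets the post-trade price move out of the effective
    spread, so the choice of horizon determines how much of the measure
    reflects adverse selection rather than what the dealer keeps:
        effective = 2 * dir * (price - mid) / mid
        impact    = 2 * dir * (mid_future - mid) / mid
        realized  = effective - impact = 2 * dir * (price - mid_future) / mid
    """
    return 2 * trades["dir"] * (trades["price"] - trades["mid_future"]) / trades["mid"] * 1e4
```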

To give you some idea of how different those order flows are: when a wholesaler gets a retail order, it makes a decision, do I internalize it or do I send it somewhere else? Of the orders a wholesaler executes, about 79% are market orders and the other 21% are marketable limit orders. If you look at what’s executed on an exchange, 0.3% of orders are market orders and 99.7% are marketable limit orders, because that’s the way exchange trading works. If you’re going to compare the realized spreads a wholesaler has to the realized spreads the exchanges are reporting, at a minimum you’re trying to ask: what would the realized spread be if the wholesaler had to route everything to the exchange? I show that if you actually adjust for that distribution, the wholesaler still has the same realized spread across all its order types, 61 basis points, but on the exchange it goes up to 182 basis points. Instead of minus 38 basis points, it’s actually 182 basis points. And that’s because the exchanges don’t do very well on market orders.

So even using the SEC’s own approach, where all I do is make one small change, actually controlling for the distribution of order types, I completely reverse the SEC’s findings. Instead of showing that you get worse price improvement from a wholesaler, a simple adjustment you would expect them to make shows that wholesalers deliver more price improvement.
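The adjustment Craig describes is a straightforward reweighting: hold the order-type mix fixed at the wholesaler’s mix and recompute the exchange’s blended realized spread. In the sketch below, only the 79%/21% and 0.3%/99.7% mixes and the headline 61, minus 38, and 182 basis-point figures come from the discussion; the per-order-type exchange spreads are hypothetical placeholders chosen so the blends land near those headline numbers.

```python
# Minimal sketch of the order-type-mix reweighting. The per-type exchange
# realized spreads are hypothetical placeholders, not figures from the proposal.

# Order-type mixes cited in the discussion.
wholesaler_mix = {"market": 0.79, "marketable_limit": 0.21}
exchange_mix   = {"market": 0.003, "marketable_limit": 0.997}

# Hypothetical per-type exchange realized spreads (bps), chosen only so the
# blends land near the figures discussed; exchanges do poorly on market orders.
exchange_rs_bps = {"market": 240.7, "marketable_limit": -38.8}

def blended(mix: dict, rs: dict) -> float:
    """Mix-weighted average realized spread in basis points."""
    return sum(mix[t] * rs[t] for t in mix)

unadjusted = blended(exchange_mix, exchange_rs_bps)    # ~ -38 bps (exchange's own mix)
adjusted   = blended(wholesaler_mix, exchange_rs_bps)  # ~ 182 bps (wholesaler's mix)
print(f"exchange RS, own mix: {unadjusted:.0f} bps; wholesaler mix: {adjusted:.0f} bps")
```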

Joe Seidel: So they’re essentially comparing apples to oranges.

Craig Lewis: Exactly. It’s a total apples-to-oranges comparison.

Joe Seidel: That’s very interesting. Hopefully people, and the Commission, will take notice of that as they proceed. You’ve been very helpful in sharing your views on how the best execution world works and on the support for the rule. Any further thoughts you might want to share with the audience before we conclude, on either DERA and the framework or on best execution specifically?

Craig Lewis: I don’t have any further specific comments on best execution, but I would say that one of the things I find troublesome about the SEC’s approach to economic analysis and rulemaking as a whole is that, when they do quantification exercises to support the economic analysis, I believe they sometimes take approaches that are expedient and allow them to complete the rule proposal quickly. There are other approaches they could have used, and they do have the data, but they choose not to because those approaches are more complicated and would take the staff much longer to get a rule proposal out. So it appears to me that there’s an effort by the Commission to accelerate the rate at which rules get proposed.

And what has gone along with that is really short comment periods. They find ways to bring rules out quickly, and then they short-circuit the comment process by saying you have 30 days to comment on a rule. Given how long these analyses and comments take to prepare, and given the rate at which these rules have come out, it’s very hard for market participants to provide the kind of meaningful feedback the Commission should view as helpful to getting to a better rule. I believe the comment process is hugely important. People should be participating in it, but the Commission has an obligation to solicit that feedback and actually embrace it.

Joe Seidel: That’s a great way to conclude. Thank you, Craig, for this discussion. It’s been fascinating and thank you all for listening to us today. To learn more about SIFMA and our work to promote effective and resilient capital markets, please visit us at www.sifma.org. Thank you very much, Craig, and to the audience for your time today.

Craig Lewis: Thank you, Joe.

Joseph Seidel is Chief Operating Officer of SIFMA. 

Craig M. Lewis is the Madison S. Wigginton Professor of Finance, Emeritus at the Owen Graduate School of Management at Vanderbilt University. He is a former Chief Economist and Director of the Division of Economic and Risk Analysis (DERA) at the U.S. Securities and Exchange Commission (SEC).