25/1/2024
A common project for both companies: to transform the ESEF regulatory obligation into a financial communication opportunity for all players.
As an equity analyst and CFA charterholder, Boris Bourdet is well acquainted with the data-related challenges currently faced by financial professionals. In 2018, he created EquitInsight, a financial modeling and valuation platform for equity investors. But accessing the right data at the right time remains an issue… He told us why.
I started my career in 2003, as a financial auditor at Ernst & Young. For a few years, I helped review many company accounts, before joining Natixis Securities as a sell-side equity financial analyst. There, I followed a dozen companies in the General Retail and Luxury Goods sectors, including Inditex, Kering, etc. My job was to dissect accounts and gather all relevant data on the companies covered to produce the right investment recommendations for the clients (buy-side analysts and equity fund managers).
From 2011, I became an investor myself at DNCA Finance, an asset management company: as an equity analyst and portfolio manager, my job was still to dissect accounts and gather the right data in order to invest in the right stocks, but my coverage was much broader: I had to monitor about a hundred companies from very diverse sectors, several dozen of which were actually in our equity portfolios.
Throughout this period, I was struck by the "handmade" dimension of financial modeling, which involves very time-consuming tasks such as data crunching from PDF files. This is why in 2018, I decided to create EquitInsight, a solution which provides up-to-date models for financial analysts, so that they can save time and focus on tasks with a higher added value.
First of all, the stakes are not exactly the same on the buy-side and on the sell-side, although in both cases the analyst must keep a close watch on the companies he follows in order to deliver opinions on the past and, above all, future value of companies. If you work on the buy-side, your analyses and recommendations are intended for the fund manager of the asset management company you work for. If you're sell-side, your work is geared toward your clients – i.e. the investors who pay for this advice, including asset management companies.
On disclosure day, you often need to peel through press releases and update the models of companies as diverse as Airbus, Nestlé, Kering, Reckitt, Schneider or Total… all at the same time, while listening to overlapping conference calls! Obviously, in this context, the risk of making an error is high and the consequences can be severe… But there is no other option: we need the data.
Remember, analysts identify undervalued companies to purchase and, conversely, overvalued companies to sell. To do this, we use all available company and market data to build a financial model with the right data and make the right assumptions about future activity (revenue growth rate, operating margin rate, Capex...). Based on these models, we are able to determine a theoretical price objective – if this target is higher than the current market price, it is a buying opportunity, and vice versa.
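The target-price logic described above — build a model, derive a theoretical value per share, and compare it to the market price — can be sketched with a simple Gordon-growth model. All function names, figures and rates below are illustrative assumptions for the sake of the example, not EquitInsight's actual methodology:

```python
# A minimal sketch of the "theoretical price vs. market price" logic.
# All figures (FCF per share, growth and discount rates) are hypothetical.

def target_price(fcf_per_share: float, growth: float, discount: float) -> float:
    """Gordon-growth estimate of intrinsic value per share:
    value = next-year free cash flow / (discount rate - growth rate)."""
    if discount <= growth:
        raise ValueError("discount rate must exceed growth rate")
    return fcf_per_share * (1 + growth) / (discount - growth)

def recommendation(target: float, market_price: float) -> str:
    """If the theoretical target exceeds the market price, the stock
    looks undervalued (buy); otherwise overvalued (sell)."""
    return "buy" if target > market_price else "sell"

# Hypothetical example: EUR 5 FCF/share, 2% perpetual growth, 8% discount rate
tp = target_price(5.0, 0.02, 0.08)   # 5 * 1.02 / 0.06 = EUR 85 per share
print(round(tp, 2), recommendation(tp, market_price=70.0))
```

In practice the model is far richer (divisional forecasts, Capex, margin assumptions…), but the final step is always this comparison between a model-derived target and the quoted price.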
Yet although they are essential to support our understanding of a company, models have no value if they are made under false assumptions. And since the only way to avoid making false assumptions is to gather as much data as possible, analysts spend a lot of time reading reports from other experts, meeting managers, or visiting company sites (factories, stores, logistics centers, etc.).
With the increasing amount of available data, this work is potentially never-ending. For analysts, this means an ever-growing portion of their time spent searching for data... rather than using models to value and compare more companies.
Accessing data may be a crucial step for our work, but it's also one of the trickiest…
When it comes to it, our first concern is reliability. There is an immense amount of data available on the market, on tens of thousands of companies. However, much like ready-cooked dishes, we are not always able to say what they contain. In many cases, the available aggregates (adjusted EBIT, Capex…) do not match the data communicated by companies' management. You quickly end up comparing apples and oranges and losing all confidence in the data you use. Accessing raw data is the only real way to keep control of what is being modeled.
Beyond reliability, there is also the issue of granularity. Accounting regulations allow European groups some flexibility in their accounting presentation choices, but we need to be able to compare companies based on homogeneous data. So for some groups, it is sometimes necessary to fill the blanks with data from the appendices: repayment of lease liabilities, divisional breakdown, non-recurring items… Hence the need to access data on a very granular level.
Another major concern is the diversity of sources: analysts use items from multiple sources besides the official accounts. Press releases and presentation slides are extremely useful to understand the financials. For instance, they include "non-GAAP" indicators (like-for-like sales in retail, RevPAR in the hotel industry, aircraft deliveries in aeronautics, volumes in food & beverages, Core Tier 1 ratios in banks…) that are particularly scrutinized by the market.
I would add that timing is of the essence. Markets are becoming more and more sophisticated, and information circulates very quickly. Analysts must adapt and assimilate the information quickly and correctly. A human alone could never keep up with all this data, and that's why digitization is so crucial to financial analysis.
This is all the more true as markets are constantly changing. Accounting standards keep evolving, and so does the companies' scope of activity, which often forces us to restate the history of accounts (pro forma) as well as the latest figures… which adds to the burden of data crunching.
Unfortunately… through data crunching! So far, I have not found a solution that allows me to free myself from data crunching without increasing the risk of error, or without diluting the relevance of my financial models as a stock picker.
As I said, this is time-consuming. I have to search for publications on each individual company's website, find the data I am looking for (usually hidden in meter-long PDFs) and copy-paste it into my own documents… and as most people know, copy-pasting usually comes with errors, so I need to double-check my entries. Each publication takes between 15 minutes and an hour, depending on whether I am simply dealing with quarterly revenues or the whole full-year financial results.
I have not found a better option since publications are not currently centralized in a single access point. The only viable alternative would be to access raw data in a digital format. Data previously reworked by an external provider doesn't count, since it is not raw and thus not reliable enough.
Today, the asset management market tends to polarize: on the one hand, passive management, which places little value on the asset manager and focuses more on portfolio diversification than on the choice of securities. At the other end of the spectrum, asset management companies that still differentiate themselves by the quality of their teams must strengthen their tools to remain efficient and justify their margins. The players "stuck in the middle" must choose their side: either they drift toward passive management, or they strengthen their muscles thanks to the digitization of reporting.
Of course, I prefer this second approach: by having real-time access to financial information, tomorrow’s managers will be able to be particularly responsive in their management and generate more alpha than passive management. But for this to happen, the data ecosystem needs to be modernized!
Fortunately, Corporatings has developed Open Data solutions to help analysts access raw company data… Discover them now!