Research interests

  • Accounting, Big Data, Machine Learning
  • Financial Intermediation, FinTech
  • Corporate Finance
  • Accounting Theory, Finance Theory
  • Structural Estimations

Working Papers


Using Machine Learning to Measure Conservatism with Jeremy Bertomeu, Edwige Cheynel, and Yifei Liao
Revise and resubmit at Management Science
SSRN
Abstract Machine learning can help improve empirical proxies of conservatism by detecting patterns beyond the linear regression techniques assumed in prior literature. Using a neural network approach, we demonstrate that the fit of differential timeliness almost doubles when incorporating non-linearities and complex interactions. The model offers the promise of reducing noise in measurements and designing more powerful tests to assess theories of conservatism. Measures based on machine learning exhibit (a) fewer economically anomalous observations, (b) economic associations consistent with the literature, and (c) less unexplained year-over-year instability; they also reveal (d) a secular decline in conservatism. In simulations, we show that, while existing measures perform honorably even in the presence of a complex data-generating process that they were not designed to capture, proxies based on machine learning methods are the most robust to specification error, feature less attenuation bias, and reduce the incidence of false negatives and positives.
Transit Rents in a Gravity Model of Trade with Richard Friberg and Katrin Tinn
Abstract We study the global impact of rent extraction by countries that are favorably located to intermediate international trade. Using a novel measure of land-sea distances, we estimate a modified gravity trade model (1993-2016) and show that transit rents sharply lower trade. We use our model and simulations to gauge the welfare effects of transit rents. While transit countries benefit, general equilibrium price distortions impose substantial costs on all countries, including, interestingly, those only indirectly affected (e.g., the USA). Our results show that customs unions with transit countries mitigate the problem but free trade agreements do not.
Smart Lending
Abstract This paper shows that a data-based screening technology can increase the cost of financial intermediation. The use of data in the screening process reduces the acquisition of soft information by traditional lenders, which further harms constrained borrowers. Additionally, groups in which fewer borrowers were financed in the past are under-represented in the data, leading to a cross-sectional difference in screening efficiency: screening is more efficient for borrowers with greater historical lending data. When traditional and technological lenders coexist, borrowers about whom data can provide precise information raise funds from technological lenders, while those with less informative historical data choose traditional lenders, who can make up for the lack of hard, data-based information by acquiring soft information. The existence of technological lenders thus increases the cost of intermediation. I identify conditions under which traditional lenders benefit from restricting their own access to data-processing technology when competing against the technological lender.
Bank asset structure and the risk-taking implications of capital and liquidity requirements
Abstract In addition to risky loans, banks hold risky securities that provide uncertain future liquidity. This leads them to choose an asset structure with their desired correlation between liquidity and long-term asset returns. We show that liquidity management and risk management concerns create a trade-off that generates an inverse relationship between security holdings and aggregate asset risk. Capital requirements mitigate liquidity risk in all future states of the world, thereby reducing the cost of liquidity risk and leading banks to increase aggregate asset risk. Liquidity requirements such as the Liquidity Coverage Ratio (LCR) affect high-liquidity-shock states and mitigate aggregate asset risk-taking. These results highlight the tension between capital and liquidity regulations in addressing the risk-taking incentives of financial intermediaries.

In Progress


Casting a Chill: The Impact of ISDS Challenges by Foreign Investors on Domestic Regulation with Mark Maffett and Weijia Rao

When more information increases uncertainty: A new test of voluntary disclosure theory with Edwige Cheynel

Publications


Oliver Hart, La finance vue à travers la théorie des contrats incomplets (2017) with Gilles Chemla, in Michel Albouy (ed.), Les Grands Auteurs en Finance, 2ème édition, Editions EMS

Computer Science Proceedings

Consistent Belief State Estimation, with Application to Mines (2011) with Adrien Couëtoux and Olivier Teytaud, International Conference on Technologies and Applications of Artificial Intelligence

DOI
Abstract Estimating the belief state is the main issue in games with partial observation. It is commonly done by heuristic methods, with no mathematical guarantee. We focus here on mathematically consistent belief state estimation methods in the case of one-player games. We clearly separate the search algorithm (which might be, e.g., alpha-beta or Monte-Carlo Tree Search) from the belief state estimation. We propose rejection methods and simple Monte-Carlo Markov Chain (MCMC) methods, with a time budget proportional to the time spent by the search algorithm on the situation at which the belief state is to be estimated; this is conveniently approximated by the number of simulations in the current node. While the approach is intended to be generic, we perform experiments on the well-known Mines game, available on most Windows and Linux distributions. Interestingly, it detects non-trivial facts, e.g. that the probability of winning the game is not the same for different moves, even those with the same probability of immediate death. The rejection method, which is slow but has no parameter and is consistent in a non-asymptotic setting, performed better than the MCMC method in spite of tuning efforts.

Continuous Rapid Action Value Estimates (2011) with Adrien Couëtoux, Mátyás Brendel, Hassen Doghmen, Michèle Sebag, and Olivier Teytaud, Asian Conference on Machine Learning Proceedings (ACML)

Abstract In the last decade, Monte-Carlo Tree Search (MCTS) has revolutionized the domain of large-scale Markov Decision Process problems. MCTS most often uses the Upper Confidence Tree algorithm to handle the exploration versus exploitation trade-off, while a few heuristics are used to guide the exploration in large search spaces. Among these heuristics is the Rapid Action Value Estimate (RAVE). This paper is concerned with extending the RAVE heuristic to continuous action and state spaces. The approach is experimentally validated on two benchmark problems: the artificial treasure hunt game, and a real-world energy management problem.

Q-Learning with Double Progressive Widening: Application to Robotics (2011) with Nataliya Sokolovska and Olivier Teytaud, in Lu B.L., Zhang L., Kwok J. (eds.), Neural Information Processing (ICONIP)

DOI
Abstract Discretization of state and action spaces is a critical issue in Q-Learning. In our contribution, we propose a real-time adaptation of the discretization via the progressive widening technique, which has already been used in bandit-based methods. Results consistently converge to the optimum of the problem, without changing the parametrization for each new problem.