QRS 2024 Keynote 1

Automated Test Generation at Meta (Wednesday, July 3)


This keynote covers work on automated test generation at Meta over the past seven years, briefly reviewing End-to-End and Social Testing through the test generation platforms Sapienz, Virtual Alpha, and WW/Mia, all of which have been deployed and have found thousands of bugs before they reached production. The talk will also cover recent work on the current deployment of automated unit test generation using observation-based testing and LLM-based test extension, the latter being an example of Assured Large Language Model Software Engineering (Assured LLMSE). Assured LLMSE places the language model within a wider software engineering workflow, thereby providing guarantees that code generated by the model does not regress existing behavior and improves it in some measurable way.
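The assurance idea described above can be pictured as a filter pipeline around the language model: every candidate test the model produces must clear a series of objective checks before it is accepted. The sketch below is a minimal illustration of that idea in Python; the class and field names (`Candidate`, `builds`, `passes`, `new_coverage`) are hypothetical stand-ins, not Meta's actual workflow or API.

```python
# Minimal sketch of an Assured-LLMSE-style filter pipeline (hypothetical names;
# the real Meta workflow is not public in this detail). A candidate test produced
# by an LLM is accepted only if it (1) builds, (2) passes against the current
# code, and (3) measurably improves coverage -- so an accepted test cannot
# regress existing behavior and must improve it in a measurable way.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    name: str
    builds: bool       # does the generated test compile?
    passes: bool       # does it pass on the current codebase?
    new_coverage: int  # lines newly covered beyond the existing suite


def assure(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only candidates that satisfy every assurance check."""
    checks: List[Callable[[Candidate], bool]] = [
        lambda c: c.builds,            # must compile
        lambda c: c.passes,            # must not fail (no behavioral regression)
        lambda c: c.new_coverage > 0,  # must give a measurable improvement
    ]
    return [c for c in candidates if all(check(c) for check in checks)]


if __name__ == "__main__":
    batch = [
        Candidate("test_login_retry", builds=True, passes=True, new_coverage=7),
        Candidate("test_flaky_guess", builds=True, passes=False, new_coverage=3),
        Candidate("test_no_gain", builds=True, passes=True, new_coverage=0),
    ]
    # Only test_login_retry clears all three checks.
    print([c.name for c in assure(batch)])
```

The key design point is that the guarantees come from the surrounding workflow (the checks), not from trusting the model's output: the LLM is free to propose anything, and only verifiably non-regressing, measurably improving tests survive.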


Dr. Mark Harman

Research Scientist, Meta Platforms, Inc. (Full Time)

Professor of Software Engineering, University College London (UCL) (Part Time)

Mark Harman is a full-time Research Scientist at Meta Platforms in the Instagram Product Performance team, working on software engineering automation. He was previously in the Simulation-Based Testing (SBT) team at Meta, which he co-founded. The SBT team developed and deployed both the Sapienz and WW platforms for client- and server-side testing. Sapienz grew out of Majicke (a start-up Mark co-founded), which was acquired by Facebook (now Meta Platforms) in 2017. Before working at Meta Platforms, Mark was head of Software Engineering at UCL and director of its CREST centre, where he remains a part-time professor. In his more purely scientific work, he co-founded the field of Search-Based Software Engineering (SBSE) in 2001. He received the IEEE Harlan Mills Award and the ACM Outstanding Research Award in 2019 for his work, and was awarded a fellowship of the Royal Academy of Engineering in 2020.

Dr. Nadia Alshahwan

Lead Engineer, Meta Platforms, Inc.

Nadia is a lead engineer at Meta Platforms in the Instagram Product Performance team, working on automating all aspects of reliability, quality, and performance. Previously she led the team that created Virtual Alpha, a population of realistic bots to which the apps are released first in order to obtain early reliability and performance signals. Virtual Alpha is used today as part of the release process for most major Meta apps. Before joining Meta, Nadia worked at JPMorgan Chase in Cyber Security and Information Architecture. She was also a researcher on automated software testing and malware detection at UCL and the University of Luxembourg.