
One Year of My Research

At the end of my sophomore year, after a year of reading CV papers and feeling the pain of transferring the latest CV work into real-world products, I figured our fundamental research was too far behind to allow us to pursue fancy product dreams. I vowed to advance research.

Having read The Ph.D. Grind, I knew how important it was to have an advisor with experience in reviewing top-tier conference papers. I sent my first email to a famous CV professor in the CS department, and he politely replied that he didn’t mentor undergraduates. I knew I had to try other research directions. From the same book, I had developed some interest in software engineering research, believing that boosting programmers’ productivity could indirectly create more brilliant products for this world, so I searched for the keyword ‘Nanjing University’ among recent ICSE PC members, and found Chang Xu on the ICSE 2017 PC.

With some effort, I managed to get in touch with Yanyan Jiang, a PhD candidate and later an assistant professor in Chang Xu’s group. From here on I will refer to Yanyan Jiang as JYY. JYY is a superstar in the group, who already had several top papers and experience in academic advising.

In our first meeting, JYY asked me to pick one of his current areas. I interpreted my options as either finding bugs in the wild or working on programming by example (PBE). Back then, I naively despised the idea of PBE because I thought no one would ever trust a datasheet or a program synthesized by what is basically guessing, and I was sensitive to such soundness issues because I was so afraid of dead ends after seeing how another PhD student, working on requirements tracing, struggled to graduate in this agile world. Funny how I didn’t realize it was the same underlying reason why his work was difficult and why PBE could be useful.

I told JYY I would like to do something with JavaScript, preferably finding new JavaScript bugs, because I had some front-end experience that many academics lack, and I knew JYY’s main expertise was in finding bugs.

JYY suggested that I manually examine the issues of some open-source application or framework to see if any were both general and non-trivial. My first try was React, since finding bugs there would be super impactful. It turned out there weren’t many open issues, and legend has it that Facebook puts considerable effort into maintaining its open-source projects. Of the existing issues, many required a deep understanding of the React code base to grasp what was really happening. I ended up classifying the issues I could understand into three categories: memory leaks, wrong choice of API, and compatibility with IE. I found most of these issues too specific, and since I was told our group had no experience with performance bugs, I didn’t report them to JYY. In hindsight, that was a huge mistake. Just because issues seem specific doesn’t mean you can’t find a general cause through deep analysis.

It’s important to note that my mindset back then was ‘select an existing work as a base, analyse it, and do some significant incremental work’, because that’s how I was trained in my last deep learning project. For a very long time I didn’t notice that other ways of doing research exist, and JYY is actually an example: his favorite style is to start a project by analysing a real-world issue and gradually connecting it to a hole in existing research. Anyway, the next time I met with JYY, I thought that by explaining the basics of how JavaScript works in the browser and in Node, I would remind JYY of some fertile land of publication, which he would point out to me so that we could have a paper to work toward. After hearing my explanation of JS applications, he suggested I collect some concrete examples of concurrency bugs in JS applications. We were not on the same page, and I failed to recognize this until much later.

My search for concurrency issues in open-source software didn’t go very well. JavaScript’s designers deliberately shield its users from the concept of concurrency: every JavaScript application runs as a single-threaded event loop with run-to-completion semantics (no JavaScript function can be interrupted by another). Searching for related keywords in JavaScript repos was fruitless. I unilaterally planned to change direction. After a brief explanation of why concurrency is not a thing in the JavaScript world, I began pitching papers I had chewed over and felt somewhat interested in to JYY, to see which one interested him: impact analysis, record-replay debugging, web service testing, and such, expecting the never-uttered sentence “interesting, let’s drill into this paper”. JYY has a very picky taste and didn’t seem interested in any of that work.
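To make the run-to-completion point concrete, here is a minimal sketch (my own illustration, not from the original discussion) of why a thread-style data race cannot happen inside a single JavaScript function:

```javascript
// Run-to-completion: the synchronous loop below cannot be preempted
// by the timer callback, so the callback never observes an
// intermediate value of `counter`.
let counter = 0;

setTimeout(() => {
  // Scheduled on the event loop; runs only after the current
  // code has run to completion.
  console.log(counter); // prints 3, never 0, 1, or 2
}, 0);

for (let i = 0; i < 3; i++) {
  counter++;
}
```

Races in JavaScript do exist, but only at the granularity of whole callbacks (e.g. two asynchronous callbacks mutating shared state in an unexpected order), which is partly why keyword searches for thread-style concurrency turned up nothing.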

Then I realized I had to go out and look for cases. I read an interesting story from Netflix about a performance bug they hit due to a wrong assumption about how an Express API behaves, and I quickly related it to my own experience of writing web applications with tons of dependencies. npm is the largest software registry, and virtually every web application depends on at least one third-party library. If an API is poorly designed, or its documentation too scarce, the programmer will use it with a brittle understanding, causing bugs that are hard to debug.
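This kind of brittle understanding can hide in a single call. A classic illustration (my own toy example, unrelated to the Netflix incident) is `Array.prototype.sort`, which compares elements as strings unless a comparator is supplied:

```javascript
// Pitfall: with no comparator, sort() converts elements to strings
// and orders them lexicographically, not numerically.
const latencies = [5, 40, 1, 200];

const naive = [...latencies].sort();                  // [1, 200, 40, 5]
const numeric = [...latencies].sort((a, b) => a - b); // [1, 5, 40, 200]
```

The API behaves exactly as documented; the bug lives in the gap between the documentation and the user’s mental model, which is precisely the gap we wanted to study.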

My reading on natural programming and empirical studies of JavaScript bugs quickly convinced me that misunderstanding APIs is a huge source of bugs. In fact, API misuse detection was already a well-known topic at SE conferences. What excited me most was that JYY also showed interest in deepening this line of research. We hoped to understand how API knowledge fails to get transferred from the author to the user, and how to repair that broken link.

Then the first semester ended, and from the start of the winter holidays I committed myself to studying for the TOEFL. English was much harder than I thought, and JYY promised to wait until I finished my test in May. I didn’t stop thinking about my research while studying English, because I knew my time was short. Influenced by the term ‘API learnability’, I interpreted my target problem as programmers’ failure to learn, and therefore went outside computer science to read How Learning Works and watch lectures on human behavioral biology. The overarching idea in my head was to develop a framework for API documentation that could help users at any level learn better. I experimented with all kinds of weird theories in my diary, but failed, over and over, to connect them to actionable CS research. I started to doubt whether such a framework could exist, and even whether my research was worth continuing.

When the new semester began, I deliberately tried to feed my coursework back into my research. In my mobile development course, I surveyed my classmates on their opinions of Stack Overflow, hoping to find new opportunities to improve crowd-sourced API documentation. In my application integration course, I persuaded my teammates to start an API recommendation project, to see what API knowledge we lacked. Even my later work on fixing a VS Code bug was intended to reveal what knowledge was missing from external documentation yet critical for debugging.

Somehow, my thoughts stayed at a hypothetical level, lacking a research front to put my feet on. There was tons of research around API usability, but it seemed no existing literature had ever tried to define a beautiful framework for API providers to design documentation that fulfills diverse learning requirements. I was stuck on this idea of ‘design’ because I strongly believed in the value of solving problems at the source. Researchers had tried to recognize and complement missing knowledge in documentation: they studied the different types of knowledge an API can contain, mined code examples and insight sentences, and integrated such extra documentation into the programming environment. Others had tried to discover wrong knowledge in documentation, finding directive defects. There were also standard API description languages and supplementary checking tools, but they only addressed trivial signature issues. No work seemed like a sufficient starting point for building a framework for API design. I thought I needed a grand theory, especially one that could explain different users’ learning needs, so my reading drifted further and further astray, and I remained unsatisfied. It seemed no previous researcher was thinking on the same scale as I was.

As my junior year approached its end, I felt tremendously anxious and somewhat desperate. I felt like I was fighting a lonely war, with no one around me in the lab sharing a similar goal. No one here was seriously talking about APIs. No one read much related work. None of my peers wanted to apply for a PhD on such a topic. Knowledge and learning have deep connections with the humanities and science majors, among whom I had no friends. Ultimately, I was isolated. I had a hunch that good research shouldn’t go like that, and I grew a burning desire to get out of the isolation.

Soon after my meetings with JYY resumed, I decided to rush an idea with some connection to PBE. I brought my nascent idea of synthesizing user scripts or CSS layouts to JYY, and my oscillating position was sharply criticized. JYY warned me that if I didn’t stick to one direction, we would break up. I left the lab, and the whole world looked dim. Not knowing where to go, I ran into another of JYY’s students. I talked to him, and he speculated that JYY might think I was a coward and an opportunist. I resolved to go back to my previous API work. I could accept having no publication, but I could not allow myself to be called a coward. How cute I was.

I speculated that JYY had good reasons why I shouldn’t jump to PBE: expertise is never easy to acquire, especially in serious program synthesis, which sits at the hard core of computer science and requires a strong background in programming languages and theory. Even though what I wanted to synthesize were user scripts and style sheets, I would need to understand how these applications connect to the very heart of logic. My DL and SE background could only let me scratch the surface of the field. If I were smarter, I would have chosen a better ship to jump to, like going back to analysing and finding bugs.

Then I continued to explore Stack Overflow with JYY, which was still the most promising land for API research. JYY encouraged me to forget any grand goal and simply collect some solid data on the problems there as a start, and we both found the revision histories interesting. I turned to Strauss’s grounded theory, asking what had actually happened there, and incrementally built up and interconnected categories. As I was working on this exciting project, my meandering year in SPAR came to an end.

Update 1: After reading this post, JYY told me he actually had confidence in my PBE idea, since program synthesis is an area he feels very passionate about, so it was okay that I didn’t have a strong background. For a student’s first research project, the advisor’s expertise on the topic matters a lot more than the student’s. His intent back then was to remind me of the importance of focus, and he would have welcomed it if I had really decided to jump to PBE.

Update 2: I want to add my thanks and reverence for Prof. Chang Xu. Though I worked directly with JYY, Chang was present at many of our meetings. He made quite a few sharp and wise comments on our direction, and I only regret that I didn’t give enough thought to his early warning about the difficulty of addressing the problem of learning in a program analysis group.