AI Engineering Development Process

Motivation for AI Engineering Development Process

Artificial intelligence (AI) applications often involve not only classical application engineering but also elements of research. Sometimes it is not clear from the start which approach will work best, and one needs to conduct experiments to evaluate multiple approaches. For example, if we are building a machine learning model, we may need to experiment with different feature sets until we find an optimal one.
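To make this concrete, here is a minimal, self-contained sketch of such an experimentation loop: candidate feature subsets are compared on a held-out split using a toy nearest-centroid model. The dataset, the feature indices, and the model are all invented for illustration; in practice one would use a real training pipeline and proper cross-validation.

```python
# Toy experimentation loop: score candidate feature subsets on a held-out
# split and keep the best-performing one. All data here is made up.
from statistics import mean

# rows of ((feature_a, feature_b, feature_c), label)
rows = [
    ((1.0, 5.0, 0.2), 0), ((1.2, 4.8, 0.9), 0), ((0.9, 5.1, 0.4), 0),
    ((3.0, 1.0, 0.8), 1), ((3.2, 0.9, 0.1), 1), ((2.9, 1.2, 0.6), 1),
]
train, test = rows[:4], rows[4:]

def centroid(points):
    # component-wise mean of a list of feature vectors
    return [mean(col) for col in zip(*points)]

def accuracy(feature_idx):
    """Train a nearest-centroid classifier restricted to the chosen features."""
    def project(x):
        return [x[i] for i in feature_idx]
    cents = {
        label: centroid([project(x) for x, y in train if y == label])
        for label in (0, 1)
    }
    def predict(x):
        p = project(x)
        return min(cents, key=lambda l: sum((a - b) ** 2
                                            for a, b in zip(p, cents[l])))
    return mean(predict(x) == y for x, y in test)

# the "experiment": try several feature subsets and compare scores
candidates = [(0,), (2,), (0, 1), (0, 1, 2)]
scores = {fs: accuracy(fs) for fs in candidates}
best = max(scores, key=scores.get)
```

The point is the shape of the loop, not the model: each candidate approach is evaluated under identical conditions, and the decision is made from measured scores rather than intuition.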

Furthermore, if we are building machine learning models, debugging is usually not an easy task. In many cases it is also not trivial to evaluate the performance of statistical models, or to predict how that performance will translate into business value. All these factors add a layer of complexity that engineering teams need to cope with.

5 Tangible Metrics for the Speed of Software Engineering Teams

Introduction

Firstly, since we want to discuss the topic of metrics, what does it mean for a software engineering team to perform well anyway? For the purpose of this post, let us take the following metrics for a given software engineering team as the fundamental ones:

  • Speed of development (try to maximize)
  • Number of bugs (try to minimize)

This model is clearly oversimplified; however, in favor of keeping this post brief, we will use it here. Furthermore, we are concentrating on the technical perspective. Alignment of business and engineering is crucial for success, but it is a topic for another post.

Checklist for an Efficient Code Review

Why Is Code Review Important?

Peer code review is a technique widely used by software engineering teams. The intuition is that if more software engineers review the code, there will be fewer bugs and, in general, the maintainability of the code will improve.

According to this study, such intuition is justified. The authors examine code review coverage on open-source codebases and find that: