By Mikaël LeBlanc
May 27, 2024

How can web development benefit from AI coding assistants?

One of the most hotly debated topics today is the growing impact of artificial intelligence (AI) on our society. While some worry it will take jobs and disrupt the economy, we are optimistic that it will significantly boost developer productivity. Even with the limited scope of the tools currently on the market, the potential for efficiency gains is already substantial.

In this article, we will help you choose the AI model best suited to your needs and we will offer our recommendations for fully leveraging the tools available to developers.

Screenshot of an example of code generation with Gemini, Google's artificial intelligence platform.
Source: Google Cloud

Selecting the Best AI Model for Your Situation

Given that each model has its own particularities, you should consider the constraints specific to your needs to make your choice.

Maximizing Power

One of the greatest benefits of Google’s Gemini 1.5 Pro is its enormous context window, which supports up to a million tokens while retaining the information it contains remarkably well. Developers can leverage this in their daily work by including a large part of the codebase in context, which improves the relevance of the model’s answers and, ultimately, their productivity.
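To make this concrete, here is a minimal sketch of what feeding part of a repository to a long-context model might look like. It assumes Node.js, the @google/generative-ai package, and an invented list of source files; treat it as an illustration rather than a production setup.

```typescript
import { readFileSync } from "node:fs";
import { GoogleGenerativeAI } from "@google/generative-ai";

// Invented list of source files we want the model to "see".
const files = ["src/app.ts", "src/router.ts", "src/services/user.ts"];

async function askAboutCodebase(question: string): Promise<string> {
  // Concatenate the files into one large prompt; a million-token window
  // leaves room for a sizeable slice of the repository.
  const codeContext = files
    .map((path) => `// FILE: ${path}\n${readFileSync(path, "utf8")}`)
    .join("\n\n");

  const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY ?? "");
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

  const result = await model.generateContent(
    `${codeContext}\n\nUsing the code above, ${question}`
  );
  return result.response.text();
}

askAboutCodebase("where is user authentication handled?").then(console.log);
```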

Prioritizing Value for Money

If you want to prioritize a lower cost of ownership, a more active development ecosystem, and/or full access to the model itself, Meta’s Llama 3 70B is an excellent choice. Its openly available weights and its efficiency relative to its size make it an attractive option.

Guaranteeing the Best User Experience

Many assistants are inseparable from the model they depend on, or require significant tinkering to connect to different models. Is that flexibility worth the effort? It’s up to you to decide, but if you don’t have specific needs, the assistant’s user experience will likely weigh more heavily in your decision.

Choosing the Right Assistant

It’s important to distinguish between the code assistant software and the AI model itself. The assistant essentially functions as an interface to the model, and it is largely what shapes the end-user experience.

Depending on the model you choose, the interface may be predetermined. For example, the only model available with GitHub Copilot is OpenAI’s GPT-4. Continue.Dev, on the other hand, supports a much wider variety of models, but it lacks integration with Visual Studio.

Code assistants can be used in various ways, but they typically follow two main patterns. The first is direct integration with the editor, augmenting auto-completion with context such as the current selection, open files, Command-Line Interface (CLI) output, and more. The second is a simple chat window, as popularized by well-known tools like ChatGPT.

Here are a few assistants to consider, depending on your needs:

  • GitHub Copilot: A reliable choice from a reputable company, providing a complete solution.
  • Continue.Dev: Flexible, it allows you to use either the best available model or more cost-effective, potentially free, alternatives.
  • Perplexity: An improved web search interface, offering access to continuously updated sources.
  • Meta AI: Utilizes the latest Llama 3 model, renowned for its exceptional coding capabilities.
  • Cody: Enhances understanding of the codebase through Sourcegraph’s indexer, providing valuable insights.

Maximizing Your Code Assistant’s Potential

You’ve successfully set up your code assistant, and it’s ready to respond to your commands. But what’s next? Here are some common use cases:

Generating Simple Code and Unit Tests

The immediate impulse with assistants is to request direct code generation. However, generated code can be incorrect, incomplete, insecure, or deviate from established standards. The golden rule is to review the generated code thoroughly.

With this in mind, a code assistant can significantly save time, particularly in creating simple and/or repetitive flows. Even better, it can generate tests based on given requirements, facilitating Test-Driven Development (TDD) seamlessly!
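As an illustration, here is the kind of test suite an assistant can produce from a one-line request such as “write unit tests for a slugify(title) function that lowercases, trims and hyphenates a title.” The slugify module and the Vitest framework are assumptions made for this example; review the output as you would any generated code.

```typescript
import { describe, expect, it } from "vitest";
import { slugify } from "./slugify"; // hypothetical module under test

// Typical assistant-generated tests: start from the stated requirements,
// then verify the implementation against them (TDD-style).
describe("slugify", () => {
  it("lowercases the title", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("trims surrounding whitespace", () => {
    expect(slugify("  spaced out  ")).toBe("spaced-out");
  });

  it("collapses multiple separators into a single hyphen", () => {
    expect(slugify("one --  two")).toBe("one-two");
  });

  it("returns an empty string for empty input", () => {
    expect(slugify("")).toBe("");
  });
});
```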

Code Refactoring

A second opinion never hurts! Consider your assistant as an intelligent “rubber ducky.” You can request it to improve, secure, or rethink an existing class. Our experience indicates that Large Language Models (LLMs) always have insights to offer when prompted. While not every suggestion may be applicable, the useful advice outweighs the occasional miss.
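For instance, given a small function with deeply nested conditions, a typical assistant suggestion is to flatten it with guard clauses. The discount logic below is invented purely for illustration:

```typescript
// Before: nested conditionals that are harder to follow.
function discountBefore(user: { isMember: boolean; years: number }): number {
  if (user.isMember) {
    if (user.years > 5) {
      return 0.2;
    } else {
      return 0.1;
    }
  } else {
    return 0;
  }
}

// After: the kind of guard-clause refactoring an assistant often proposes.
function discountAfter(user: { isMember: boolean; years: number }): number {
  if (!user.isMember) return 0;
  return user.years > 5 ? 0.2 : 0.1;
}
```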

Commit Logs and Pull Request Summaries

Many developers often dismiss the idea of describing code in natural language. The belief is that well-written code should inherently be clear and self-descriptive.

However, in contexts like commit logs or pull request summaries, these descriptions should not be underestimated. Letting an assistant write them accomplishes two objectives: it eliminates redundant work, and it acts as a check on the clarity of the code itself. When even an AI assistant understands our intentions, we can be confident in our work.
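Here is a minimal sketch of the idea, assuming Node.js and the openai npm package; the model name and prompt wording are placeholders to adapt to your own setup.

```typescript
import { execSync } from "node:child_process";
import OpenAI from "openai";

async function draftCommitMessage(): Promise<string> {
  // Grab the staged changes exactly as a reviewer would see them.
  const diff = execSync("git diff --staged", { encoding: "utf8" });

  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You write concise commit messages: an imperative subject line " +
          "under 72 characters, then a short body explaining the why.",
      },
      { role: "user", content: diff },
    ],
  });

  return completion.choices[0].message.content ?? "";
}

draftCommitMessage().then(console.log);
```

The same pattern extends naturally to pull request descriptions by swapping out the diff and the system message.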

Example of GitHub Copilot generating lines of code

 

Pull Request Reviewing

Similarly, why not seek an extra round of feedback on the code your team writes? If you’re using an AI solution with an API, you can even automate the entire process within a CI/CD pipeline. And there’s nothing like a shared adversary to defuse the tensions that come with code reviews! ☺️
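A rough sketch of such a pipeline step, again assuming the openai npm package; the branch name and the way the review is published (here, simply printed to the job log) are placeholders:

```typescript
import { execSync } from "node:child_process";
import OpenAI from "openai";

// Intended to run inside a CI job on a pull request branch.
async function reviewPullRequest(): Promise<void> {
  // Diff the PR against the target branch (placeholder branch name).
  const diff = execSync("git diff origin/main...HEAD", { encoding: "utf8" });

  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are reviewing a pull request. Point out bugs, security " +
          "issues and deviations from common conventions. Be brief.",
      },
      { role: "user", content: diff },
    ],
  });

  // In a real pipeline you would post this back as a PR comment;
  // printing it to the job log keeps the sketch self-contained.
  console.log(completion.choices[0].message.content);
}

reviewPullRequest().catch((err) => {
  console.error(err);
  process.exit(1);
});
```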

Help with Bug Fixes

Some bugs can be so challenging that you feel like consulting a rubber duck… Thankfully, those days are behind us, and now AI can (sometimes) provide answers to our queries! In fact, certain tools like GitHub Copilot and Continue even provide the option to include command prompt output and external documentation alongside the code in question. While there’s no guarantee of a solution, it’s always worth a shot!

Retrieval Augmented Generation

Certain AI assistant solutions facilitate Retrieval Augmented Generation (RAG) using local documents or a search engine. This approach offers an intermediary level of interpretation between your query, its context, and the diverse resources at hand. Imagine asking a question directly on StackOverflow or within your internal documentation and promptly receiving the answer!
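The core loop behind RAG is easy to sketch. The snippet below assumes the openai npm package for embeddings and generation, and uses a tiny invented in-memory document store: it retrieves the passage most similar to the question and prepends it to the prompt.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Tiny in-memory "knowledge base" standing in for internal documentation.
const documents = [
  "Deployments are triggered by pushing a tag of the form release-*.",
  "Feature flags are configured in config/flags.json and read at startup.",
  "The payments service requires the STRIPE_KEY environment variable.",
];

async function embed(texts: string[]): Promise<number[][]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: texts,
  });
  return res.data.map((d) => d.embedding);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

async function answer(question: string): Promise<string> {
  const [questionVec, ...docVecs] = await embed([question, ...documents]);

  // Retrieval: find the document most similar to the question.
  const best = docVecs
    .map((vec, i) => ({ doc: documents[i], score: cosine(questionVec, vec) }))
    .sort((a, b) => b.score - a.score)[0];

  // Augmented generation: include the retrieved context in the prompt.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "Answer using only the provided context." },
      { role: "user", content: `Context: ${best.doc}\n\nQuestion: ${question}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

answer("How do I trigger a deployment?").then(console.log);
```

Real assistants replace the in-memory list with an index over your documentation or a search engine, but the retrieve-then-generate structure stays the same.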

Screenshot of the Perplexity AI platform, which combines a search engine and a conversational agent.

Optimizing Model Performance

It may seem counter-intuitive at first glance, but the way you address an LLM has a big impact on the quality of the response you receive: the phrasing of your sentences, the vocabulary you use, the role you assign to the assistant, and even a bit of encouragement all make a difference!

Articles are published regularly detailing prompting techniques capable of boosting the effectiveness of existing models by simply formatting requests in specific ways.

Expert Prompting

In practical terms, an LLM functions as a complex statistical algorithm with a singular purpose: predicting the next word in a text. Consequently, when the original text implies reliability or underscores the necessity of accuracy due to external factors, the quality of the response tends to improve. This phenomenon is referred to as expert prompting in the literature.

As improbable as it sounds, specifying the AI’s role by prefacing a request with phrases like “As an expert in field X with Y years of experience…”, or concluding with statements such as “Please do your best, my job and the lives of 20 kittens depend on it!”, can make the difference between a valuable answer and a wasted exchange.
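In practice, this often boils down to a carefully worded system message. A small sketch, with an invented role description and request:

```typescript
// Expert prompting: the system message assigns a credible role,
// and the user message reinforces the stakes.
const messages = [
  {
    role: "system",
    content:
      "You are a senior TypeScript developer with 15 years of experience " +
      "building accessible, high-traffic web applications.",
  },
  {
    role: "user",
    content:
      "Review this form component for accessibility issues and suggest " +
      "fixes. Please do your best, this ships to production tomorrow.",
  },
];
```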

Example of a prompt sent to a platform in several different requests.

 

Few-Shot Prompting

The way LLMs interpret and integrate information makes the structure of a query just as important as its content, which is particularly evident in tasks like code generation. Few-shot prompting takes advantage of this: by including one or two short examples of the desired input and output before the real request, you let the model mirror the pattern rather than guess at it. Conversely, overly detailed requests risk overwhelming the model with too much information, so an effective habit is to keep requests short and simple, even if the initial response is incomplete, and then iterate, gradually asking for additions and modifications.
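Here is a minimal sketch of a few-shot request, teaching the model an invented project-specific naming convention through two examples before the actual question:

```typescript
// Few-shot prompting: the examples establish the expected output format,
// so the final request can stay short.
const fewShotPrompt = `
Convert each REST endpoint into our route constant naming convention.

Example:
Endpoint: GET /users/:id
Constant: ROUTE_USERS_GET_BY_ID

Example:
Endpoint: POST /orders
Constant: ROUTE_ORDERS_CREATE

Endpoint: DELETE /orders/:id/items/:itemId
Constant:`;
```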

Chain-of-Thought Prompting (CoT)

One technique that remains useful even two years after the paper that introduced it is Chain-of-Thought (CoT) prompting. The concept is straightforward and relatable: to solve a problem effectively, it’s often easier to break it down into logical steps and address them one by one.

However, the reason this technique works so well for artificial intelligence is slightly different. Since LLMs are algorithms that predict the next word in a text, prompting them to structure their reasoning forces them to focus on one small part of the problem at a time while writing out the relevant step. It also keeps the most recent steps fresh in the context, which supports the reasoning that follows.
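The technique can fit in a single added sentence. A sketch with an invented debugging question:

```typescript
// Invented SQL snippet used only to illustrate the prompt structure.
const querySnippet =
  "SELECT * FROM orders ORDER BY created_at LIMIT 20 OFFSET 40;";

// Direct request: the model jumps straight to an answer.
const directPrompt = `Why does this pagination query return duplicate rows?\n${querySnippet}`;

// Chain-of-Thought request: ask for intermediate reasoning first.
const chainOfThoughtPrompt = `${directPrompt}
Think step by step: describe what the query does, list the conditions under
which rows could repeat across pages, and only then give the most likely
cause and a fix.`;
```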

Conclusion

Although the field is only a year or two old, the tools and techniques presented in this article are just the tip of the iceberg when it comes to the productivity gains AI promises.

With a bit of creativity, we can now set up customized models, implement autonomous agents that interface with external systems, and even integrate AI functionalities into traditional products.

Uzinakod’s experts are already working with code assistants on our projects. To discover some examples, visit the Case Studies section of our website.
