
Project 4: Backend Development

Learning Goals

  • Learn how to transform development specifications into a working backend.
  • Learn how to collaborate with LLMs to generate backend code, while refining and debugging the output, with a real or mocked frontend.

Project Context

In P1, you defined requirements and user stories. In P2, you expanded some of those stories into detailed development specifications. In P3, you implemented two user stories for the front end of your application. In P4, you will implement the backend of your application.

Begin by reading over the user stories you chose to implement for P3. For each of these user stories, you had the LLM generate development specifications in P2. Look over the architecture plans for each of these development specs. First, if you haven't done it yet, have the LLM harmonize the two architecture plans and diagrams to ensure that the LLM knows you are building a single backend for the application that can support both user stories.

For each module in your architecture, decide whether it will be part of the frontend or backend. Recall that frontend components usually handle the application's user interfaces and the business logic. The backend often handles data storage (separate data stores for each tenant), compute-intensive algorithms (e.g. machine learning, audio/video codecs, path routing), calls out to external backend services (e.g. speech transcription, image recognition, authentication, cryptography, etc.), and networking to and from other frontend UIs connected to the same backend. In P4, you only need to specify and implement the backend modules.

By the end of this sprint, you'll have a working backend implementation plus plenty of documentation describing how it works. Recall that all modules live in a single backend and should share the same technical choices to minimize redundant internal and external dependencies.

Remember to use the LLMs as much as possible to generate your deliverables. You may not modify any generated code directly, only by prompting the LLM.

Note

Make sure your backend supports 10 simultaneous frontend users. Simultaneity means that all of those users' frontend UIs are talking to the backend at the same time. Do not attempt to make your backend scale to more users.

Note

Consider whether you need to have a working frontend to tell if your backend works or not. It is ok to mock the frontend (from the backend's point of view) if it helps you make timely progress.
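
One minimal way to mock the frontend, combined with the 10-simultaneous-user requirement above, is to spin up one thread per pretend frontend UI and have each issue a request to the backend. This is only a sketch: `handle_request` is a hypothetical stand-in for a single backend entry point, which in your project would more likely be an HTTP call to a running server.

```python
import threading

# Hypothetical stand-in for one backend API operation; replace with a call
# into your real backend (e.g., an HTTP request to a running server).
def handle_request(user_id: int) -> dict:
    return {"user": user_id, "status": "ok"}

def mock_frontend(user_id: int, results: list, lock: threading.Lock) -> None:
    """Pretend to be one frontend UI issuing a request to the backend."""
    response = handle_request(user_id)
    with lock:
        results.append(response)

results: list = []
lock = threading.Lock()
threads = [threading.Thread(target=mock_frontend, args=(i, results, lock))
           for i in range(10)]  # the 10 simultaneous users P4 requires
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # all 10 mocked frontends got a response
```

If every thread receives a sensible response while they run concurrently, you have some evidence the backend meets the simultaneity requirement without building the real frontend first.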

Note

Your backends will (eventually) be deployed to Amazon Web Services (AWS). Let that influence your choice of external backend services, especially if the cost is zero or minimal. In P4, you must mock any calls to external services. In P5, you will be able to create an AWS account with $100-200 in free credits, usable for 6 months (or until the credits are exhausted), to call those services.
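
Mocking an external service call can be as simple as patching the wrapper function that would eventually talk to AWS. In this sketch, `transcribe_audio` and `caption_for` are hypothetical names, not part of the assignment; `unittest.mock.patch` swaps the real call out so the rest of the backend logic can still be exercised in P4.

```python
from unittest.mock import patch

# Hypothetical wrapper around an external service call; in P5 this would
# invoke the real AWS API, so in P4 it is deliberately unimplemented.
def transcribe_audio(audio_bytes: bytes) -> str:
    raise NotImplementedError("real transcription service call goes here in P5")

def caption_for(audio_bytes: bytes) -> str:
    """Backend logic that depends on the external service."""
    return transcribe_audio(audio_bytes).strip().capitalize()

# Patch the external call so the surrounding backend code can be tested now.
with patch(__name__ + ".transcribe_audio", return_value=" hello world "):
    result = caption_for(b"\x00\x01")

print(result)  # → Hello world
```

Keeping all external calls behind one thin wrapper like this makes the P4-to-P5 transition a one-function change.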


Deliverables

You will implement the backend for the same two user stories that you implemented in P3. Remember that one of these user stories is independent and the other is dependent on the first one.

1. Update your Development Specs

Create new development specs for each of your two chosen user stories that have a single, harmonized backend specification, i.e. the specs assume that there's a single backend that powers both user stories.

  • Include the entire development spec for each user story.
  • Have the LLM generate Mermaid diagrams for any required diagram and add images of the rendered diagrams to your submission.

2. Specify the Backend

Architecture

  • Create a single, unified architecture for the backend that supports both user stories. Write a text description of it and draw it as a Mermaid diagram. Justify your design choices as if you were speaking to a senior architect at your company.
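
As a point of reference, a unified backend diagram often takes a shape like the following. This is a hypothetical example, not a prescribed architecture; the module names are illustrative and your own diagram should reflect your two user stories.

```mermaid
flowchart LR
    UI1["Frontend UI (Story 1)"] --> API["REST API Gateway"]
    UI2["Frontend UI (Story 2)"] --> API
    API --> Auth["Auth Module"]
    API --> Logic["Business Logic Module"]
    Logic --> Store[("Persistent Store")]
    Logic -. mocked in P4 .-> Ext["External Service"]
```

The key property to show is that both user stories' frontends converge on one shared backend rather than two parallel stacks.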

Backend Modules

For every backend module in this architecture, do the following (with assistance from the LLM):

  1. Specify the module's features. What can it do? What does it not do? These should be written to be understandable by a professional backend developer.
  2. Design the internal architecture for the module. Write a text description of it and draw it as a Mermaid diagram. Justify your design choices as if you were speaking to a senior architect at your company.
  3. Define the data abstraction used in the module. If it helps you or the LLM to think about this formally, take a look at Reading 13 from MIT's 6.005 class.
  4. Determine the stable storage mechanism for the module (i.e. you can't just use an in-memory data structure because your app might crash and lose its memory. Customers really hate data loss.)
    1. Define any data schemas required to communicate with any storage databases.
  5. Define a clear, unambiguous API for external callers of this module. We suggest employing a REST API for any services accessible over the web.
  6. Provide a list of all class, method, and field declarations. Identify which are externally visible and which are private to the module.
  7. Draw a Mermaid class hierarchy diagram that shows the module-internal view of each class.
  8. Use the LLM to generate the code for each class.
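
To make steps 4-6 concrete, here is one minimal sketch of a backend module that pairs a data schema with stable storage and exposes a small externally visible API. Everything here is hypothetical (the `notes` table, the `NoteStore` class); it only illustrates the shape of a module spec, using SQLite because a file-backed database survives a crash where an in-memory structure would not.

```python
import sqlite3

# Illustrative data schema for one module's stable storage; the table and
# column names are hypothetical, not part of the assignment.
SCHEMA = """
CREATE TABLE IF NOT EXISTS notes (
    id    INTEGER PRIMARY KEY,
    owner TEXT NOT NULL,
    body  TEXT NOT NULL
);
"""

class NoteStore:
    """Externally visible API of the module; `_db` is private to it."""

    def __init__(self, path: str = ":memory:") -> None:
        # A real file path (not ":memory:") gives the crash-safe storage
        # P4 asks for; ":memory:" is used here only to keep the demo small.
        self._db = sqlite3.connect(path)
        self._db.executescript(SCHEMA)

    def create(self, owner: str, body: str) -> int:
        cur = self._db.execute(
            "INSERT INTO notes (owner, body) VALUES (?, ?)", (owner, body))
        self._db.commit()
        return cur.lastrowid

    def get(self, note_id: int):
        return self._db.execute(
            "SELECT owner, body FROM notes WHERE id = ?", (note_id,)).fetchone()

store = NoteStore()           # use a real file path in your project
note_id = store.create("ada", "harmonize the architectures")
print(store.get(note_id))
```

If the module is reachable over the web, each public method would map onto a REST endpoint (e.g., `create` behind `POST`, `get` behind `GET`), which is the kind of mapping your API spec in step 5 should pin down unambiguously.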

Wrap it up

  1. Write and run the minimum testing code required to ensure that the user-story program paths through each module's API work as expected. Don't worry about exceptional cases for now.
  2. Check your code into the GitHub repository for your project.
  3. Check in any code you write to test the functionality of your module.
  4. Create a README for your backend source code.
    1. Describe every dependency on an external library, framework, technology, or service required (or optionally required) by the module.
    2. What databases does this module create, read from, and write to?
    3. Describe how to install, start, stop, and reset the backend services and data storage. Assume the user of these docs is a site reliability engineer who has been newly assigned to work with your team.
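
A "minimum required test" for step 1 can stay very small: exercise only the happy path of each user story through the module's API. The sketch below uses a hypothetical `create_item` function as a stand-in for one of your module's externally visible operations; substitute your real API.

```python
# Hypothetical stand-in for a backend module's externally visible API.
def create_item(store: dict, owner: str, body: str) -> int:
    """Create an item and return its id (illustrative only)."""
    item_id = len(store) + 1
    store[item_id] = {"owner": owner, "body": body}
    return item_id

def test_user_story_happy_path() -> None:
    store: dict = {}
    item_id = create_item(store, "ada", "first story")
    # The story's program path: create, then read back what was created.
    assert store[item_id]["body"] == "first story"

test_user_story_happy_path()
print("happy path ok")
```

Checking this test code into the repository alongside the module (step 3 above) is what lets a reviewer re-run your evidence that the user stories work.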

3. Reflection

Write a 500-word (i.e., one-page) reflection on:

  1. How effective was the LLM in generating the backend code? What did you like about the result?
  2. What was wrong with what the LLM first generated? What were you able to fix easily? What problems were more difficult to fix?
  3. How did you convince yourself that the implementation was complete and accomplished your user stories? Did you use the LLM to help?

Turn-in Instructions

Please turn in a single document that contains these parts:

  1. The two user stories that you implemented.
  2. The two (updated) development specifications and Mermaid diagrams for those user stories as in the Update your Development Specs section.
  3. The specification and description of the unified backend architecture and its Mermaid diagram.
  4. For each module in your backend, provide its specification as in the Backend Modules section.
  5. A link to your source code in GitHub.
  6. A link to your test code in GitHub.
  7. A link to your backend's README in GitHub.
  8. A 1-page reflection as in the Reflection section.
  9. Copy-paste logs of all LLM interactions you used during this sprint. Identify the name and version of the LLM used.