Ever since I first heard about them, I have been a great fan of unit tests, Test-Driven Development, and CI/CD in general. Whenever I can, I incorporate these into my demos. I wrote a fair bit about these topics in Implementing DevOps Principles with Oracle Database; if you're curious to learn more, head over to the PDF and have a look for yourself. This article assumes you're vaguely familiar with the concepts.
What/why/where?
I created a not-quite-as-short demo for APEX World 2026 where I'll showcase how to use JavaScript in the database to good effect. You'll find the entire project on my GitHub. I have previously used VSCode/Cline/gpt-5 to assist me with the task of writing a demo, but this time I put my faith in the technology and used VSCode, Codex, and gpt-5-3-codex to create unit tests for my application. What application? Let me explain…
WARNING: before I start, let me put out a word of warning! You may not be allowed to use AI in your organisation. Before you get too excited, please consult with your security team and ask them about the AI strategy, approved tools and models, and anything else that might be of importance. Don't use AI unless you've been authorised to do so; there can be side effects.
Let there be code
This post builds on the foundation of the previous one where I combine MLE/JavaScript with Oracle REST Data Services (ORDS) to extend my application with a REST API. All handlers, executed in the database, are written in JavaScript. As usual, I use the SQLcl projects workflow to deploy my application, and I've already built release v-1.0.0. This includes the ORDS configuration and all handlers. There's a CI/CD pipeline, too, deploying the generated artefact against an ORDS-enabled database to see if it all works.
In a nutshell, the app allows you to create action items. Each of these has an id (primary key), and a status of either OPEN or COMPLETED. A team with a minimum size of 1 person works on any given action item. This person must be the LEAD; other team members are simply MEMBER. Here's an example of such an action item in its JSON representation:
```json
{
  "actionId": 2,
  "actionName": "conduct q1 customer feedback interviews",
  "status": "OPEN",
  "team": [
    {
      "assignmentId": 6,
      "role": "MEMBER",
      "staffId": 5,
      "staffName": "elliott brooks"
    },
    {
      "assignmentId": 5,
      "role": "LEAD",
      "staffId": 1,
      "staffName": "avery johnson"
    }
  ]
}
```
A JSON Schema validates the incoming data to ensure it matches the business requirements, as described in my earlier article.
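The earlier article has the full details; as a rough illustration only, a schema enforcing the payload shape shown above might look something like this. The field names follow the example payload; the specific constraints (minimum team size, allowed status and role values) are my own sketch, not necessarily the project's actual schema:

```json
{
  "type": "object",
  "required": ["actionName", "status", "team"],
  "properties": {
    "actionName": { "type": "string", "minLength": 1 },
    "status": { "enum": ["OPEN", "COMPLETED"] },
    "team": {
      "type": "array",
      "minItems": 1,
      "items": {
        "type": "object",
        "required": ["role", "staffId"],
        "properties": {
          "role": { "enum": ["LEAD", "MEMBER"] },
          "staffId": { "type": "integer" }
        }
      }
    }
  }
}
```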
The only missing part: unit tests. Just as with any other code, it's important to have unit tests associated with the REST API. As you just read, I hadn't implemented those yet.
Tangible benefits and productivity gain
Writing unit tests can be tedious, but it doesn't have to be (anymore). I probably spent 30 minutes creating these unit tests, and that's a price I'm more than willing to pay. Catching regressions early is much better than troubleshooting production issues at 2 am. I have done that, and I didn't appreciate the stress.
The intention is to use Vitest to create the unit tests. There are lots of testing frameworks out there in the JavaScript world; I just happen to like Vitest. It's not an endorsement…
I want the unit tests to use the fetch() API to read from and write to my ORDS endpoints. For each REST verb (GET, POST, PUT, DELETE) I need two tests: one that succeeds, and one that fails. As per the introduction, I'm using Codex (as the VSCode plugin) with gpt-5-3-codex.
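To give a feel for what that means in practice before showing what Codex produced: such a suite typically revolves around a small wrapper over fetch(). Here is a sketch of what a helper like that can look like. The names are mine, not the generated code, and the fetch implementation is injectable so the helper itself can be exercised without a live ORDS instance:

```javascript
// Hypothetical request helper in the spirit of the generated test suite.
// BASE_URL matches my local ORDS setup; adjust for your environment.
const BASE_URL = 'http://localhost:8080/ords/emily/js';

async function request(path, options = {}, fetchImpl = fetch) {
  const { headers, ...rest } = options;
  const response = await fetchImpl(`${BASE_URL}${path}`, {
    ...rest,
    headers: { 'Content-Type': 'application/json', ...(headers ?? {}) },
  });
  // ORDS returns no body for 204 (DELETE); guard against JSON parse errors.
  const text = await response.text();
  return { response, payload: text ? JSON.parse(text) : null };
}
```

Each positive/negative test pair then boils down to calling this helper and asserting on `response.status` and the parsed `payload`.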
Here’s what I did, step by step.
Storing the OpenAPI Catalog
With ORDS you get the OpenAPI Catalog for your module free of charge. In my case, it’s available at:
http://localhost:8080/ords/emily/open-api-catalog/js/
I downloaded the file and stored it in api/openapi-catalog.json. Note that I was too lazy to create a certificate to protect the endpoint with TLS. Please don't follow my example there! It's fine for my playground environment, not so much for anything else.
Asking Codex to create unit tests
Next I can ask Codex to create the unit tests for my REST endpoints. Here’s my prompt (I’m in plan mode at this point). Note that I have a simple agents.md file in place in my project, too.
Read openapi-catalog.json and create a set of unit tests for each REST verb (GET, POST, PUT, DELETE). At the very minimum, ensure that you have a positive and a negative test for each endpoint. Read the sample-data.sql script in case you need to access seed data. Use vitest as the testing framework, do not mock API access, use the information from the openapi catalog file. Show me the plan you are proposing before starting with the implementation
Codex thought for a bit, then responded:
I’ll inspect api/openapi-catalog.json, the SQL seed script, and the current project test setup first, then I’ll give you a concrete implementation plan before writing any tests.
I found vitest in dependencies and located the seed script under dist/releases/…/_custom/sample-data.sql. Next I’ll inspect package.json and api/openapi-catalog.json to identify runnable test command and all endpoints/verbs to cover.
I’ve confirmed the API surface from OpenAPI; I’m now checking the JavaScript handler implementation to map valid/invalid payloads and status codes so the test plan is accurate before coding.
Proposed plan before implementation:
- Parse API contract and lock base URL
- Use api/openapi-catalog.json as the source of truth.
- Target server: http://localhost:8080/ords/emily/js.
- Endpoints to cover:
- GET /actionItem/
- POST /actionItem/
- GET /actionItem/{id}
- PUT /actionItem/{id}
- DELETE /actionItem/{id}
- Create Vitest integration-style test suite (no mocks)
- Add a new test file under a standard test folder (for example tests/action-item.api.test.js).
- Use real HTTP calls via fetch against ORDS.
- Keep shared helpers in the same file first (request helper, payload builders, cleanup list).
- Build test data strategy from sample-data.sql
- Reuse seeded staff IDs (1..10) and known action IDs (2,3,4) for read/update cases.
- For write/delete tests, create records during test execution and clean them up to avoid state pollution.
- Implement minimum required positive + negative coverage per REST verb
- GET /actionItem/
- Positive: default list returns 200 and expected response shape.
- Negative: invalid query (e.g. search with illegal chars or bad limit/offset) returns 400.
- POST /actionItem/
- Positive: valid payload returns 201 and created object.
- Negative: schema-invalid payload (e.g. short actionName or invalid team) returns 400.
- GET /actionItem/{id}
- Positive: existing seeded ID (e.g. 2) returns 200.
- Negative: invalid ID format (non-numeric) returns 400.
- PUT /actionItem/{id}
- Positive: valid update payload for seeded action returns 200.
- Negative: invalid payload/body schema returns 400.
- DELETE /actionItem/{id}
- Positive: delete a freshly created action returns 204.
- Negative: invalid/non-numeric ID returns 400 (or numeric missing ID 404 if preferable).
- Make test execution straightforward
- If missing, add npm test script using Vitest (vitest run).
- Keep tests deterministic and independent (setup/teardown where needed).
- Validate locally
- Run the test suite and report:
- Passed/failed tests
- Any environment prerequisites (ORDS/database must be running with seed data loaded).
If you approve this plan, I’ll start implementing it now.
I did indeed approve, and off it went. Changes were made to package.json, package-lock.json, and of course the new file containing the unit tests, tests/action-item.api.test.js. It's remarkable, at least to me, how Codex managed to gather all the required information from the file system and make use of it. Everything in the plan was correct.
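Part of what makes this work is that the catalog file is just an OpenAPI document, so enumerating the endpoints and verbs to cover is a straightforward traversal of its paths object. A minimal sketch of that idea follows; the inline catalog fragment is made up for illustration, the real file contains far more detail:

```javascript
// Illustrative only: enumerate verb/path pairs from an OpenAPI document.
const catalog = {
  paths: {
    '/actionItem/': { get: {}, post: {} },
    '/actionItem/{id}': { get: {}, put: {}, delete: {} },
  },
};

const VERBS = ['get', 'post', 'put', 'delete', 'patch'];

function listOperations(doc) {
  const ops = [];
  for (const [path, item] of Object.entries(doc.paths ?? {})) {
    for (const verb of VERBS) {
      // Only emit verbs that are actually defined for this path.
      if (item[verb]) ops.push(`${verb.toUpperCase()} ${path}`);
    }
  }
  return ops;
}

console.log(listOperations(catalog));
```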
Code quality
I was equally surprised by the code quality: everything worked on the first attempt?!?!
That's where the productivity gain is made: I know how to write unit tests in Vitest, but that doesn't mean I can type them into the system as quickly as Codex can. As an extra benefit, no made-up data was inserted into the database, and no integrity constraints were violated. As per the plan, the AI referred to the sample-data.sql script to understand valid values. At the beginning of this article I outlined the structure of a valid payload, and that's exactly what Codex created:
```javascript
function validCreateBody(suffix = 'base') {
  return {
    actionName: `new action ${suffix} task`,
    status: 'OPEN',
    team: [
      { role: 'LEAD', staffName: 'avery johnson', staffId: 1 },
      { role: 'MEMBER', staffName: 'blake ramirez', staffId: 2 },
    ],
  };
}
```
Staff IDs map to the staff table in the database, and the names match those keys. Nice. Codex was also kind enough to clean up after each test:
```javascript
afterEach(async () => {
  for (const id of Array.from(createdIds)) {
    try {
      await request(`${ACTION_ITEM_ITEM_PATH}/${id}`, { method: 'DELETE' });
    } finally {
      createdIds.delete(id);
    }
  }
});
```
Each test for a given REST verb was neatly described; GETting the collection, for example, is shown here:
```javascript
describe('GET /actionItem/', () => {
  it('returns list of action items (positive)', async () => {
    const { response, payload } = await request(ACTION_ITEM_COLLECTION_PATH);
    expect(response.status).toBe(200);
    expect(payload).toBeTruthy();
    expect(payload.items).toBeTypeOf('object');
    expect(Array.isArray(payload.items)).toBe(true);
    expect(payload).toHaveProperty('hasMore');
    expect(payload).toHaveProperty('totalRows');
  });

  it('rejects invalid query params (negative)', async () => {
    const { response, payload } = await request(
      `${ACTION_ITEM_COLLECTION_PATH}?search=invalid***`
    );
    expect(response.status).toBe(400);
    expect(payload).toBeTruthy();
  });
});
```
The negative tests put my logic to the test, too, for example by providing a non-integer ID to the endpoint. And it all worked! I added a few more spot-checks, but altogether I didn't find anything standing out.
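For completeness: that negative test only passes because the server side rejects such input. This is not the project's actual handler code, just a sketch of the kind of guard an MLE handler can apply before touching the database:

```javascript
// Hypothetical ID validation, illustrating why a non-numeric ID yields a 400.
// The real handler's logic may differ.
function parseActionId(raw) {
  if (!/^\d+$/.test(String(raw))) {
    return { ok: false, status: 400, error: 'id must be a positive integer' };
  }
  return { ok: true, id: Number(raw) };
}

console.log(parseActionId('abc')); // rejected with status 400
console.log(parseActionId('42')); // accepted, numeric id extracted
```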
Test execution
The tests worked fine on the first attempt, which is more than I can say about my own unit tests… Here's an example:
```
$ npm run test

> test
> vitest run

 RUN  v4.0.18 /Users/martin.b.bach/devel/java/conference-talks

 ✓ tests/action-item.api.test.js (10 tests) 550ms
   ✓ GET /actionItem/ (2)
     ✓ returns list of action items (positive) 228ms
     ✓ rejects invalid query params (negative) 35ms
   ✓ POST /actionItem/ (2)
     ✓ creates an action item (positive) 73ms
     ✓ rejects invalid payload (negative) 21ms
   ✓ GET /actionItem/{id} (2)
     ✓ returns an existing action item (positive) 21ms
     ✓ rejects non-numeric id (negative) 18ms
   ✓ PUT /actionItem/{id} (2)
     ✓ updates an existing action item (positive) 68ms
     ✓ rejects invalid update payload (negative) 19ms
   ✓ DELETE /actionItem/{id} (2)
     ✓ deletes an existing action item (positive) 45ms
     ✓ rejects non-numeric id (negative) 20ms

 Test Files  1 passed (1)
      Tests  10 passed (10)
   Start at  17:03:27
   Duration  671ms (transform 19ms, setup 0ms, import 29ms, tests 550ms, environment 0ms)
```
This has been fun!
Summary
AI is part of the development process, and it seems it's here to stay. According to the latest DORA Report, more than 90% of all developers use AI in some capacity. Done right, it can provide amazing benefits, as witnessed in this article. I suggest you head over to dora.dev to learn more about the impact of AI. But be advised: AI doesn't always produce the same results. It's quite possible that you'll get something different every time you run Codex, and you've got to cater for that. Implement guard rails (an agents.md file, for example), and save the prompts you used along with the results. Later on, when you don't remember every detail of your conversation with the AI, this will come in handy.
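What such guard rails look like depends entirely on your project. Purely as an illustration (these are my own made-up rules, not the contents of the project's actual agents.md), a minimal file could pin down the conventions you care about:

```markdown
# Agent instructions

- Use Vitest for all unit tests; do not introduce other test frameworks.
- Never mock API access; tests run against the local ORDS instance.
- Consult sample-data.sql before inventing any seed data.
- Show a plan and wait for approval before changing files.
```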
Oh, yeah, and apologies for sounding like a broken record, but, again, make sure you have consulted your security team before using AI.