How I Implemented Search for Audit Logs During My Hacktoberfest Journey
Hacktoberfest: Contribution Chronicles
What Did I Contribute?
As I mentioned in my previous blog, this post covers my journey implementing a search feature for audit logs in OpsiMate. First, I created an issue proposing a search bar that lets users filter and view specific audit logs quickly and efficiently. I then made my first PR, which was hilariously broken (I'll explain why). Later I created a proper PR, which I'll discuss in detail.
How I Started
When I started, I planned to implement the filtering logic on the client side. However, CodeRabbitAI (the code review tool used in OpsiMate) challenged my naive approach: I realized I should implement server-side search so filtering covers all audit logs, not just the current page. While working on this PR, I accidentally merged upstream changes incorrectly because I hadn't properly updated my remote origin and local main branches. When I made my next commit, a bunch of commits from other contributors appeared in my PR. I felt ashamed, but I realized the best way to handle the mess was through communication (not everything is about code). I wrote an apology message in my PR and opened a fresh PR to keep the git history clean, mentioning that it would replace the previous one. I learned that we should never fear communication and mistakes. They are experiences, not failures.
Implementation Details
Client-side: Key Implementations
I created a custom useDebounce hook that delays the search query by 350ms, preventing excessive API calls while the user is typing:
import { useEffect, useState } from 'react';

function useDebounce<T>(value: T, delay: number): T {
  const [debouncedValue, setDebouncedValue] = useState<T>(value);

  useEffect(() => {
    // Restart the timer on every change; only the last value survives the delay.
    const handler = setTimeout(() => {
      setDebouncedValue(value);
    }, delay);

    return () => {
      clearTimeout(handler);
    };
  }, [value, delay]);

  return debouncedValue;
}
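For context, here is a minimal sketch of how such a hook can be wired to a search input. The component name, prop, and placeholder text are my own illustration, not OpsiMate's actual code:

import { useEffect, useState } from 'react';

// Illustrative only: a search box that reuses the useDebounce hook above,
// so the parent only reacts after the user pauses typing for 350ms.
function AuditLogSearch({ onSearch }: { onSearch: (query: string) => void }) {
  const [search, setSearch] = useState('');
  const debouncedSearch = useDebounce(search, 350);

  useEffect(() => {
    onSearch(debouncedSearch);
  }, [debouncedSearch, onSearch]);

  return (
    <input
      value={search}
      onChange={(e) => setSearch(e.target.value)}
      placeholder="Search audit logs..."
    />
  );
}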
Then I wrapped the filter object in useMemo so it is cached and only recalculated when its dependencies change. This keeps filter management cheap and prevents the page from slowing down when multiple filters are applied:
const filters = useMemo(
  () => ({
    search: debouncedFilters,
    actionType,
    resourceType,
  }),
  [debouncedFilters, actionType, resourceType]
);
Server-side: Key Implementations
On the server side, I created a private helper function called buildWhereClause() in auditLogRepository.ts to centralize all filtering logic:
private buildWhereClause(filters: {
  userName?: string;
  actionType?: string;
  resourceType?: string;
  resourceName?: string;
  startTime?: string;
  endTime?: string;
})
Instead of scattering filter conditions across different functions, this single function builds the database WHERE clause. Both getAuditLogs() and countAuditLogs() use it, eliminating code duplication (see the sketch below). One of the trickier parts was handling timestamps correctly. When users filter audit logs by date range, their local time zone might differ from the server's, causing inconsistent results. To solve this, I implemented UTC normalization in controller.ts. I actually received this feedback from CodeRabbitAI, which showed me that LLMs can sometimes provide valuable feedback for avoiding error-prone code.
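To make the idea concrete, here is a rough sketch of what a centralized WHERE-clause builder and its two callers can look like. This is my reconstruction under assumptions (a better-sqlite3-style synchronous driver, a hypothetical AuditLogFilters alias, and invented method bodies), not the exact OpsiMate implementation:

// Hypothetical alias mirroring the filters parameter shown above.
type AuditLogFilters = {
  userName?: string;
  actionType?: string;
  resourceType?: string;
  resourceName?: string;
  startTime?: string;
  endTime?: string;
};

// Sketch: one place that turns optional filters into a WHERE clause plus bind params.
private buildWhereClause(filters: AuditLogFilters): { clause: string; params: unknown[] } {
  const where: string[] = [];
  const params: unknown[] = [];

  if (filters.actionType) {
    where.push('action_type = ?');
    params.push(filters.actionType);
  }
  if (filters.resourceType) {
    where.push('resource_type = ?');
    params.push(filters.resourceType);
  }
  if (filters.startTime) {
    where.push('timestamp >= ?');
    params.push(filters.startTime);
  }
  if (filters.endTime) {
    where.push('timestamp <= ?');
    params.push(filters.endTime);
  }
  // ...the userName / resourceName search conditions are added here,
  // as shown in the buggy and fixed snippets later in this post...

  const clause = where.length > 0 ? `WHERE ${where.join(' AND ')}` : '';
  return { clause, params };
}

// Both queries reuse the same clause, so the filters can never drift apart.
getAuditLogs(filters: AuditLogFilters, limit: number, offset: number) {
  const { clause, params } = this.buildWhereClause(filters);
  return this.db
    .prepare(`SELECT * FROM audit_logs ${clause} ORDER BY timestamp DESC LIMIT ? OFFSET ?`)
    .all(...params, limit, offset);
}

countAuditLogs(filters: AuditLogFilters) {
  const { clause, params } = this.buildWhereClause(filters);
  return this.db
    .prepare(`SELECT COUNT(*) AS count FROM audit_logs ${clause}`)
    .get(...params);
}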
Time Validation:
const validateTime = (timeParam: string | undefined, paramName: string): Date | null => {
  if (!timeParam) return null;

  const timeDate = new Date(timeParam);
  if (isNaN(timeDate.getTime())) {
    throw new Error(`Invalid ${paramName} format. Expected ISO 8601 format.`);
  }

  return timeDate;
};
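And here is a hedged sketch of how the validated dates might be normalized to UTC before reaching the repository. The helper name and the exact filter fields are my assumptions; only validateTime comes from the snippet above:

import { Request } from 'express';

// Sketch only: normalize raw query params into the repository's filter shape.
function buildFiltersFromQuery(req: Request) {
  const startTime = validateTime(req.query.startTime as string | undefined, 'startTime');
  const endTime = validateTime(req.query.endTime as string | undefined, 'endTime');

  return {
    userName: req.query.userName as string | undefined,
    resourceName: req.query.resourceName as string | undefined,
    // toISOString() always renders in UTC, so the server and every client
    // compare against the same canonical timestamps.
    startTime: startTime?.toISOString(),
    endTime: endTime?.toISOString(),
  };
}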
Testing: Key Implementations
After completing the feature, I requested a review, thinking I was done. However, the maintainer asked me to add tests verifying that the new logic works as expected. That request led me to discover a bug in my implementation in auditLogRepository.ts.
The Buggy Code:
if (filters.userName || filters.resourceName) {
  const userQuery = filters.userName?.toLowerCase();
  const resourceQuery = filters.resourceName?.toLowerCase();
  where.push('(LOWER(user_name) LIKE ? OR LOWER(resource_name) LIKE ?)');
  // If only one filter is provided, the other degrades to '%%', matching every row.
  params.push(`%${userQuery ?? ''}%`, `%${resourceQuery ?? ''}%`);
}
The issue here is that I was forcing both filters into the query simultaneously, even when only one was provided. This created empty wildcard searches, which match everything, which was not the intention.
The Fixed Code:
const searchQueries: string[] = [];

if (filters.userName) {
  searchQueries.push('LOWER(user_name) LIKE ?');
  params.push(`%${filters.userName.toLowerCase()}%`);
}

if (filters.resourceName) {
  searchQueries.push('LOWER(resource_name) LIKE ?');
  params.push(`%${filters.resourceName.toLowerCase()}%`);
}

if (searchQueries.length > 0) {
  where.push(`(${searchQueries.join(' OR ')})`);
}
This approach only adds filter conditions when they’re actually provided, avoiding empty wildcards.
The Test Case
res = await app
  .get('/api/v1/audit?userName=Farhad&resourceName=web Service')
  .set('Authorization', `Bearer ${jwtToken}`);

expect(res.status).toBe(200);
expect(res.body.logs).toBeDefined();
expect(res.body.logs.length).toBe(4); // Farhad (2) OR web Service (2) = 4
Following the maintainer’s suggestion, I created seed data with multiple audit logs. One example seed is:
{
  action_type: AuditActionType.DELETE,
  resource_type: AuditResourceType.SERVICE,
  resource_id: '6',
  user_id: 4,
  user_name: 'Bob',
  resource_name: 'web Service',
  timestamp: '2025-07-18 13:25:10',
  details: null,
},
I used existing types like AuditActionType and AuditResourceType to keep the codebase consistent. I created two test suites: one testing all filter combinations, and another validating parameter inputs and error handling.
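As an illustration of the second suite, a parameter-validation test might look roughly like this; the expected status code and the assumption that the controller surfaces validateTime's error as a 400 are mine, not confirmed details of the project:

// Sketch: a malformed startTime should be rejected rather than silently ignored.
it('rejects an invalid startTime', async () => {
  const res = await app
    .get('/api/v1/audit?startTime=not-a-date')
    .set('Authorization', `Bearer ${jwtToken}`);

  expect(res.status).toBe(400);
});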
Knowledge Gained
Through this experience, I realized that communication is key to building a quality product. Working alongside other contributors, I ran into git conflicts from simultaneous changes to the same files. These challenges deepened my understanding of git workflows and the git rebase command. CodeRabbitAI also proved its worth throughout this process, catching potential issues before they became problems and making the review process smoother for everyone involved.
