Experimental Changes

This commit is contained in:
2026-04-13 14:20:04 -04:00
parent 95b4610927
commit 268928e9c5
42 changed files with 10431 additions and 13441 deletions

@@ -0,0 +1,252 @@
JIRA Story - RAG Knowledge Base
Accounting IA-2691: Remove Nightly Cancel Process for Manual Cancels from MP Nightly
Chunk 1: Overview
Metadata:
storyId=IA-2691, type=overview, domain=manualCancel, workflow=nightlyProcessing
Content:
Purpose:
Extracts the manual cancel process from the MP nightly job to enable independent execution, ensuring manual cancels continue to function after MP is decommissioned.
Business Goal:
Maintain continuity of cancel notices while decoupling from the MP nightly schedule, allowing operational flexibility and independent scheduling.
Core Behavior:
Remove manual cancel logic from MP nightly job
Execute cancel process independently as a standalone job
Generate cancel notices that exactly match requested manual cancels
Replicate MP weekend and holiday behavior (as per C390)
Outcome:
Manual cancel requests continue to be processed reliably, producing accurate cancel notices even if MP nightly job is no longer active.
Chunk 2: Preconditions and Dependencies
Metadata:
storyId=IA-2691, type=preconditions, dependencies=MPNightly,C461,IINSE461,C390
Content:
Required Preconditions:
MP nightly job currently includes the manual cancel process
C461 program and IINSE461 logic are accessible for reuse
Weekend and holiday behavior must be mimicked (C390)
Dependencies:
Access to MP nightly job logic for verification
Spool files for validation of cancel notices
Potential creation of a new scheduled job for independent execution
Implicit Rule (Made Explicit):
Manual cancels must continue to execute correctly regardless of MP nightly status
Any changes to logic must maintain consistency with prior cancel notice behavior
Chunk 3: Functional Requirement - Manual Cancel Process
Metadata:
storyId=IA-2691, type=functional, documentType=ManualCancel
Content:
Required Fields (Normalized):
CancelRequestId
PolicyNumber
CancelNoticeDate
CancelReason
Purpose:
Represents manual cancel requests that must trigger cancel notices.
Implicit Rules (Made Explicit):
Each manual cancel request generates exactly one cancel notice
CancelNoticeDate represents the date notice is generated
CancelReason must be preserved for downstream reporting and decisioning
Weekend/holiday cancel logic (C390) must be applied consistently
Chunk 4: Workflow and Processing Flow
Metadata:
storyId=IA-2691, type=workflow, orchestration=manualCancel
Content:
Processing Flow:
Receive manual cancel requests
Execute cancel logic via standalone job (extracted from MP nightly)
Call C461, which invokes IINSE461
Generate cancel notices for each request
Output spool file for validation
Verify cancel notices match requested cancels
Key Rule:
Standalone cancel process must replicate MP nightly behavior, including weekends and holidays, without requiring MP job execution.
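The verification step in the flow above (cancel notices must exactly match requested cancels) can be sketched as a set comparison against the spool output. This is a minimal illustration, not the actual C461/IINSE461 validation: the assumption that each spool line begins with the CancelRequestId followed by a "|" delimiter is hypothetical, as the real spool layout is defined by the iSeries programs.

```python
def validate_cancel_notices(requested_cancels, spool_lines):
    """Compare requested manual cancels against notices found in the spool file.

    Assumes (hypothetically) that each spool line starts with the
    CancelRequestId followed by a '|' delimiter.
    """
    requested = {c["CancelRequestId"] for c in requested_cancels}
    noticed = {line.split("|", 1)[0] for line in spool_lines if line.strip()}
    return {
        "match": requested == noticed,
        "missing": sorted(requested - noticed),    # requested but no notice
        "unexpected": sorted(noticed - requested), # notice with no request
    }
```

Any non-empty "missing" or "unexpected" list would trigger the discrepancy investigation described in the runbook chunk.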
Chunk 5: Non-Functional Requirements
Metadata:
storyId=IA-2691, type=nonFunctional, category=processReliability
Content:
System Requirements:
Standalone job must reliably produce accurate cancel notices
Execution should be schedulable independently of MP nightly job
Process must not degrade existing system performance
Performance Consideration:
Must complete within acceptable operational window
Spool file validation must be efficient and precise
Chunk 6: External System Responsibilities
Metadata:
storyId=IA-2691, type=externalSystems, systems=MPNightly,C461,IINSE461
Content:
MPNightly Responsibilities:
Previously provided manual cancel execution logic
C461 Responsibilities:
Execute cancel process logic
Call IINSE461 for core cancel functionality
IINSE461 Responsibilities:
Generate cancel notices for each manual cancel request
Implicit Rule (Made Explicit):
Spool files act as primary validation mechanism for cancel notices
Standalone job leverages existing logic but becomes the primary executor of manual cancels
Chunk 7: Business Rules
Metadata:
storyId=IA-2691, type=businessRules, domain=manualCancelProcessing
Content:
Core Rules:
Each manual cancel request triggers one cancel notice
CancelNoticeDate must reflect actual notice generation date
Weekend and holiday rules (C390) must be applied to standalone execution
Standalone job must fully replicate MP nightly cancel behavior
Edge Cases:
MP nightly job is offline → manual cancels must still execute correctly
Multiple cancel requests for same policy → each request generates a separate notice
Discrepancy in cancel notice count → requires spool file validation
Chunk 8: Data Quality Assumptions and Risks
Metadata:
storyId=IA-2691, type=dataQuality, riskLevel=low
Content:
Assumptions:
Cancel requests are accurate and complete
C461/IINSE461 logic correctly reflects MP nightly behavior
Spool files reliably record all cancel notices
Risks:
Discrepancies between standalone job output and prior MP behavior
Missing or incorrect cancel requests may result in missing notices
Mitigation:
Validate output against MP nightly logic before full deployment
Use spool files to verify notice counts
Test weekend/holiday scenarios explicitly
Chunk 9: Search Queries Supported
Metadata:
storyId=IA-2691, type=queryPatterns, purpose=RAGRetrieval
Content:
This knowledge base supports queries such as:
"How is manual cancel process extracted from MP nightly?"
"Which programs handle manual cancel notices?"
"How are cancel notices validated?"
"How is C390 weekend/holiday behavior applied?"
"What fields are required for manual cancel requests?"
"How do standalone cancel jobs differ from MP nightly execution?"
"How are discrepancies in cancel notice counts handled?"
Chunk 10: Before vs After Architecture
Metadata:
storyId=IA-2691, type=architecture, domain=manualCancel, view=beforeAfter
Content:
Before:
Manual cancel logic was embedded in MP nightly job.
Execution depended on MP nightly schedule.
Weekend/holiday behavior (C390) applied implicitly within MP logic.
Cancel notices generated as part of broader MP nightly spool output.
After:
Manual cancel process extracted into a standalone job.
Independent scheduling possible outside MP nightly window.
C461/IINSE461 handle cancel execution logic directly.
Weekend/holiday behavior (C390) explicitly applied.
Spool file output used solely for manual cancel validation.
Impact:
Decouples manual cancel responsibility from MP nightly job.
Reduces dependency risk if MP nightly fails or is decommissioned.
Provides clear operational boundaries for monitoring and reruns.
Chunk 11: Operational Runbook Implications
Metadata:
storyId=IA-2691, type=operations, domain=manualCancel, category=runbook
Content:
Rerun Procedure:
Identify pending manual cancel requests.
Execute standalone manual cancel job.
Verify spool file output matches requested cancel count.
If discrepancies exist, rerun job after investigation.
Monitoring:
Track job execution status via scheduler logs.
Validate cancel notice count against request queue.
Compare output against historical MP nightly behavior for consistency.
Alerts:
Alert if job fails to complete or crashes.
Alert if cancel notice count ≠ cancel request count.
Alert for spool file generation issues.
Alert for failed weekend/holiday behavior logic (C390).
Key Operational Considerations:
Job can be scheduled independently of MP nightly batch window.
Spool files are primary source for verification; store and retain per SLA.
Include rollback or compensating actions if notices are incorrectly generated.
Document all manual reruns in operational logs for auditing purposes.
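The alert conditions above can be expressed as a single post-run check. This is a sketch only: the status string and parameter names are illustrative, not taken from the actual scheduler.

```python
def cancel_job_alerts(job_status, request_count, notice_count, spool_written=True):
    """Evaluate the runbook alert conditions after a standalone-job run.

    Status strings and argument names are hypothetical placeholders.
    """
    alerts = []
    if job_status != "COMPLETED":
        alerts.append(f"job failed or did not complete (status={job_status})")
    if notice_count != request_count:
        alerts.append(
            f"cancel notice count {notice_count} != cancel request count {request_count}")
    if not spool_written:
        alerts.append("spool file was not generated")
    return alerts
```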

@@ -0,0 +1,208 @@
JIRA Story - RAG Knowledge Base
Accounting IA-2698: Reshuffle S3 Import Workflow
Chunk 1: Overview
Metadata:
storyId=IA-2698, type=overview, domain=fileProcessing, workflow=s3Import
Content:
Purpose: Refactors the S3 file import workflow by moving file reading and processing logic out of the Lambda trigger and into a downstream Step Function.
Business Goal:
Improve traceability, maintainability, and failure recovery by centralizing processing logic within Step Functions instead of embedding it in the S3-triggered Lambda.
Core Change:
Current: Lambda triggered by S3 both receives AND processes file
New: Lambda triggered by S3 only initiates workflow, processing occurs downstream
Outcome:
Better observability of processing steps
Ability to retry/reprocess failures via Step Function
Cleaner separation of concerns
Chunk 2: Preconditions and Dependencies
Metadata:
storyId=IA-2698, type=preconditions, dependencies=S3,StepFunctions
Content:
Required Preconditions:
S3 bucket configured to trigger Lambda on file upload
Existing Step Function capable of handling file processing logic
Dependencies:
AWS S3 event notification system
Lambda function (S3 trigger handler)
AWS Step Functions orchestration
Implicit Rule (Made Explicit):
File ingestion must still trigger the workflow exactly once per upload
Existing processing logic must be migrated fully to Step Function
Chunk 3: Functional Requirement - S3 Trigger Behavior
Metadata:
storyId=IA-2698, type=functional, component=s3TriggerLambda
Content:
New Behavior of S3 Triggered Lambda:
Receives S3 event (file upload notification)
Extracts file metadata (e.g., BucketName, ObjectKey)
Initiates Step Function execution
Passes file reference (not file contents) downstream
Explicit Rule:
Lambda must NOT read or process file contents
Implicit Rule (Made Explicit):
Lambda becomes a lightweight orchestration trigger only
Reduces Lambda execution time and complexity
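The new trigger behavior can be sketched as a thin handler that extracts metadata and starts the Step Function, never touching file contents. The state machine ARN is a hypothetical placeholder, and the `sfn` parameter is injectable purely so the sketch is testable; in the Lambda runtime it would default to a boto3 Step Functions client.

```python
import json

def handler(event, context=None, sfn=None,
            state_machine_arn="arn:aws:states:us-east-1:123456789012:stateMachine:S3Import"):
    """S3-triggered Lambda: forward a file *reference* downstream, never file contents.

    The ARN above is hypothetical; `sfn` is injectable for testing and would
    normally default to boto3.client("stepfunctions").
    """
    if sfn is None:
        import boto3  # assumed available in the Lambda runtime
        sfn = boto3.client("stepfunctions")
    record = event["Records"][0]
    file_ref = {
        "BucketName": record["s3"]["bucket"]["name"],
        "ObjectKey": record["s3"]["object"]["key"],
    }
    sfn.start_execution(stateMachineArn=state_machine_arn, input=json.dumps(file_ref))
    return file_ref
```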
Chunk 4: Functional Requirement - Step Function Processing
Metadata:
storyId=IA-2698, type=functional, component=stepFunction
Content:
Step Function Responsibilities:
Receive file reference from Lambda
Perform file reading from S3
Execute all parsing and processing logic
Handle downstream workflows dependent on file contents
Processing Capabilities:
Retry failed steps
Track execution state
Enable partial or full reprocessing
Implicit Rule (Made Explicit):
All business logic previously in Lambda must be relocated here
Step Function becomes the single source of truth for processing flow
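On the Step Function side, the first task would resolve the file reference back into contents. A minimal sketch, assuming a newline-delimited file; the function name and parsing stand-in are hypothetical, and the actual business logic migrated from the Lambda would replace the placeholder parsing.

```python
def read_import_file(file_ref, s3=None):
    """First Step Function task: fetch the file named by the reference from S3.

    `s3` is injectable for testing; it would normally default to
    boto3.client("s3"). The line-splitting below is a stand-in for the
    real parsing logic migrated out of the Lambda.
    """
    if s3 is None:
        import boto3
        s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=file_ref["BucketName"], Key=file_ref["ObjectKey"])
    body = obj["Body"].read().decode("utf-8")
    return [line for line in body.splitlines() if line.strip()]
```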
Chunk 5: Workflow and Processing Flow
Metadata:
storyId=IA-2698, type=workflow, orchestration=stepFunctions
Content:
Updated Processing Flow:
File uploaded to S3 bucket
S3 event triggers Lambda
Lambda extracts file metadata (BucketName, ObjectKey)
Lambda invokes Step Function execution
Step Function retrieves file from S3
Step Function processes file contents
Downstream processing steps executed within workflow
Key Change:
File processing is decoupled from trigger event
Chunk 6: Non-Functional Requirements
Metadata:
storyId=IA-2698, type=nonFunctional, category=architecture
Content:
System Improvements:
Traceability: Step Function execution history provides full audit trail
Maintainability: Business logic centralized and easier to modify
Resiliency: Failed executions can be retried without re-uploading file
Performance Consideration:
Slight increase in orchestration overhead
Reduced Lambda execution burden
Chunk 7: External System Responsibilities
Metadata:
storyId=IA-2698, type=externalSystems, systems=AWS
Content:
AWS S3:
Stores uploaded files
Triggers Lambda on file creation events
AWS Lambda (Trigger Layer):
Receives S3 event
Initiates Step Function execution
AWS Step Functions:
Orchestrates full file processing workflow
Handles retries, state tracking, and execution visibility
Implicit Rule (Made Explicit):
No external system reads file directly except Step Function workflow
Chunk 8: Business Rules
Metadata:
storyId=IA-2698, type=businessRules, domain=fileProcessing
Content:
Core Rules:
File processing must not occur inside S3-triggered Lambda
Step Function must handle all file reading and processing
File reference (not content) must be passed between components
Workflow must remain automatically triggered upon file upload
Acceptance Criteria Rule:
Step Function execution must still trigger after S3 import (regression requirement)
Edge Case Handling:
If Step Function fails → workflow can be retried without re-uploading file
If Lambda fails → file may require re-trigger or retry mechanism
Chunk 9: Data Quality Assumptions and Risks
Metadata:
storyId=IA-2698, type=dataQuality, riskLevel=low
Content:
Assumptions:
S3 event reliably delivers correct file metadata
Step Function has access to correct S3 permissions
File format and structure remain unchanged
Risks:
Misconfigured event payload → Step Function receives invalid file reference
Migration gaps → logic not fully moved from Lambda
Increased dependency on Step Function availability
Mitigation:
Regression testing ensures workflow triggers correctly
Validation of input payload before Step Function execution
Chunk 10: Search Queries Supported
Metadata:
storyId=IA-2698, type=queryPatterns, purpose=RAGRetrieval
Content:
This knowledge base supports queries such as:
"How is S3 file processing handled after this change?"
"Why move file processing from Lambda to Step Function?"
"What does the S3 trigger Lambda do now?"
"Where is file parsing logic executed?"
"How do Step Functions improve file processing reliability?"
"Can file processing be retried without re-uploading?"
"What data is passed from Lambda to Step Function?"
"What are the benefits of this architecture change?"
"What happens when an S3 file is uploaded?"
"What regression requirement exists for S3 workflow?"

@@ -0,0 +1,236 @@
JIRA Story - RAG Knowledge Base
Accounting IA-2827: Preload iSeries Data for Pending Cancel and Billing Statements
Chunk 1: Overview
Metadata:
storyId=IA-2827, type=overview, domain=dataPreload, workflow=nightlyProcessing
Content:
Purpose: Preloads iSeries-derived data into AWS data store for Pending Cancel and Billing Statement documents to support downstream nightly processing.
Business Goal:
Ensure required historical and current data is available in AWS so that nightly processes can accurately evaluate policy state and prior notifications.
Core Behavior:
Extract data from upstream systems (iSeries and/or EDW)
Store normalized data in AWS data store
Enable lookup of prior billing and cancellation events during nightly checks
Outcome:
Nightly workflows can reliably determine whether billing statements or pending cancellation notices have been previously issued.
Chunk 2: Preconditions and Dependencies
Metadata:
storyId=IA-2827, type=preconditions, dependencies=iSeries,EDW,Oracle
Content:
Required Preconditions:
Data must be extracted from iSeries and/or EDW
Payment schedule data may include Oracle identifiers (needs confirmation)
Dependencies:
iSeries as primary source of policy and installment data
EDW as potential supplemental data source
AWS data store for persistence
Nightly processing workflows consuming preloaded data
Implicit Rule (Made Explicit):
If Oracle identifiers are not present, downstream processes must still function using available keys (e.g., AccountIdentifier, PolicyNumber)
Chunk 3: Functional Requirement - Pending Cancel Document
Metadata:
storyId=IA-2827, type=functional, documentType=PendingCancel
Content:
Required Fields (Normalized):
AccountIdentifier
PrimaryCancellationReason
CancellationNoticeDate
CancellationEffectiveDate
PolicyNumber
Purpose:
Represents policies that are at risk of cancellation and have been issued a pending cancellation notice.
Implicit Rules (Made Explicit):
Presence of record implies a pending cancellation notice has been sent
CancellationEffectiveDate represents the future termination date if no action taken
PrimaryCancellationReason must be preserved for downstream decisioning
Chunk 4: Functional Requirement - Billing Statement Document
Metadata:
storyId=IA-2827, type=functional, documentType=BillingStatement
Content:
Required Fields (Normalized):
AccountIdentifier
PaymentDueDate
StatementType
PolicyNumber
Purpose:
Represents billing statements issued to customers for payment collection.
Implicit Rules (Made Explicit):
Presence of record implies a billing statement has been generated/sent
PaymentDueDate is used for delinquency and cancellation evaluation
StatementType differentiates billing scenarios (e.g., regular, special)
Chunk 5: Functional Requirement - Installment and Notice Tracking
Metadata:
storyId=IA-2827, type=functional, concept=installmentTracking
Content:
Required Output Fields:
HeaderNumber
PaymentNumber (InstallmentNumber)
BillingNoticeSent (Y/N)
PendingCancelNoticeSent (Y/N)
PolicyNumber (12 digits)
AccountKey
Core Logic:
Identify the highest active PaymentNumber (InstallmentNumber)
Determine whether:
Billing statement has been sent
Pending cancellation notice has been sent
Implicit Rules (Made Explicit):
“Highest active” implies latest installment that is still open/active
Flags (Y/N) are derived from presence of corresponding document records
PolicyNumber must be normalized to 12-digit format
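The core logic of this chunk (highest active installment, flag derivation, 12-digit normalization) can be sketched as follows. The record shapes are assumptions for illustration: installments as dicts with PaymentNumber and Status, and document presence modeled as sets of (PolicyNumber, PaymentNumber) keys.

```python
def track_latest_installment(policy_number, installments, billing_docs, cancel_docs):
    """Identify the highest active installment for a policy and derive notice flags.

    Field shapes are illustrative assumptions: `installments` is a list of
    dicts with PaymentNumber and Status; `billing_docs` / `cancel_docs` are
    sets of (PolicyNumber, PaymentNumber) keys marking document presence.
    """
    policy12 = policy_number.zfill(12)  # normalize to 12-digit format
    active = [i for i in installments if i["Status"] == "ACTIVE"]
    if not active:
        return None  # no open installment to evaluate
    latest = max(active, key=lambda i: i["PaymentNumber"])
    key = (policy12, latest["PaymentNumber"])
    return {
        "PolicyNumber": policy12,
        "PaymentNumber": latest["PaymentNumber"],
        "BillingNoticeSent": "Y" if key in billing_docs else "N",
        "PendingCancelNoticeSent": "Y" if key in cancel_docs else "N",
    }
```

If no document records exist for the latest active installment, both flags come out "N", matching the edge case in the business rules chunk.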
Chunk 6: Workflow and Processing Flow
Metadata:
storyId=IA-2827, type=workflow, orchestration=dataPreload
Content:
Processing Flow:
Extract data from iSeries and/or EDW
Normalize and map fields to AWS data model
Identify latest active installment per PolicyNumber
Evaluate presence of:
Billing statements
Pending cancel notices
Derive indicator flags (BillingNoticeSent, PendingCancelNoticeSent)
Store enriched records in AWS data store
Nightly process queries this data for decisioning
Key Rule:
Preloaded data must be available before nightly process execution
Chunk 7: Non-Functional Requirements
Metadata:
storyId=IA-2827, type=nonFunctional, category=dataAvailability
Content:
System Requirements:
Data must be preloaded prior to nightly batch execution
Data retrieval and storage must be reliable and repeatable
System should support incremental updates as new installments are generated
Performance Consideration:
Preload process must complete within batch window
Efficient lookup required for nightly processing queries
Chunk 8: External System Responsibilities
Metadata:
storyId=IA-2827, type=externalSystems, systems=iSeries,EDW,AWS
Content:
iSeries Responsibilities:
Provide PolicyNumber, InstallmentNumber (PaymentNumber), document indicators
Provide core billing and cancellation data
EDW Responsibilities:
Supplement missing or derived data fields (if applicable)
AWS Responsibilities:
Store normalized data
Provide query access for nightly processing
Oracle (Conditional):
May provide identifiers within payment schedule data (if available)
Implicit Rule (Made Explicit):
AWS acts as the system of aggregation and lookup, not the source of truth
Chunk 9: Business Rules
Metadata:
storyId=IA-2827, type=businessRules, domain=policyProcessing
Content:
Core Rules:
Highest active installment must be identified per PolicyNumber
Billing and cancellation indicators derived from document presence
PolicyNumber must be standardized to 12 digits
AccountIdentifier and AccountKey must consistently map across systems
Edge Cases:
Missing Oracle identifier → processing continues using available keys
Multiple documents → only latest active installment considered
No documents present → both flags set to "N"
Chunk 10: Data Quality Assumptions and Risks
Metadata:
storyId=IA-2827, type=dataQuality, riskLevel=low
Content:
Assumptions:
iSeries data is accurate and up-to-date
Installment numbering correctly reflects payment sequence
Document generation events are reliably recorded
Risks:
Missing or inconsistent AccountIdentifier across systems
Incorrect identification of “highest active” installment
Incomplete document history leading to incorrect flags
Mitigation:
Validate data completeness during preload
Normalize identifiers consistently
Cross-check installment sequencing logic
Chunk 11: Search Queries Supported
Metadata:
storyId=IA-2827, type=queryPatterns, purpose=RAGRetrieval
Content:
This knowledge base supports queries such as:
"How is iSeries data preloaded for billing and cancellation processing?"
"What fields are required for Pending Cancel documents?"
"What data is stored for billing statements?"
"How is the highest active installment determined?"
"How are billing and cancellation notice flags calculated?"
"What data does nightly processing rely on?"
"What happens if Oracle identifiers are missing?"
"How are PolicyNumbers normalized?"
"What systems provide data for preload processing?"
"How do we determine if a billing statement was sent?"

@@ -0,0 +1,186 @@
JIRA Story - RAG Knowledge Base
Accounting IA-2852: Payment Import Response from Oracle
Chunk 1: Overview
Metadata:
storyId=IA-2852, type=overview, domain=paymentProcessing, workflow=batchSubmission
Content:
Purpose:
Capture and store responses from Oracle after payment batches are submitted, confirming success or failure of each payment.
Business Goal:
Provide accounting and customer service teams with reliable confirmation of payment batch outcomes to support reconciliation, audit, and customer inquiries.
Core Behavior:
Receive response from Oracle for each submitted payment batch
Store response data in S3 audit bucket
Include both successful and failed payment notifications
Outcome:
Accounting team can confirm which payments succeeded or failed
Customer service can take timely action on failed payments
Audit trail maintained in S3 for compliance and reporting
Chunk 2: Preconditions and Dependencies
Metadata:
storyId=IA-2852, type=preconditions, dependencies=ERP,Oracle,S3
Content:
Required Preconditions:
Payment batch submission process must be operational
Oracle ERP system configured to send responses to the designated endpoint
S3 audit bucket available for storing responses
Dependencies:
ERP must successfully submit payment batches
Oracle must send responses using the correct URL (update noted 4/7)
S3 bucket access permissions must allow write operations from the ingestion process
Implicit Rule (Made Explicit):
Responses cannot be processed or stored if batch submission fails
ERP and Oracle systems must use the same endpoint URL to avoid 403 errors
Chunk 3: Functional Requirements - Response Capture
Metadata:
storyId=IA-2852, type=functional, documentType=PaymentResponse
Content:
Required Fields (Normalized):
PaymentBatchId
PaymentId
Status (Success / Failure)
Timestamp
ErrorMessage (if any)
Purpose:
Represents the response from Oracle confirming the status of each payment in the batch.
Implicit Rules (Made Explicit):
Each PaymentId in a batch must have a corresponding response record
Status field determines whether downstream reconciliation is required
ErrorMessage is required for failed payments to enable troubleshooting
Chunk 4: Non-Functional / Technical Requirements
Metadata:
storyId=IA-2852, type=nonFunctional, category=technicalIntegration
Content:
System Requirements:
Responses must be stored reliably in S3 for audit purposes
Endpoint URL must be consistent with other ERP events to prevent 403 errors
Process should handle both high-volume and low-volume payment batches
Performance Consideration:
Capture and storage should occur in near-real-time after Oracle sends the response
System must gracefully handle retries in case of network or ERP errors
Chunk 5: External System Responsibilities
Metadata:
storyId=IA-2852, type=externalSystems, systems=Oracle,ERP,S3
Content:
Oracle ERP Responsibilities:
Send success/failure responses for each payment in submitted batch
Use correct endpoint URL for responses
S3 Responsibilities:
Store response data as audit records
Ensure records are accessible for reconciliation and reporting
ERP Responsibilities:
Submit payment batch
Use consistent endpoint URL to avoid 403 errors
Implicit Rule (Made Explicit):
ERP and Oracle URL must match configured endpoint to prevent communication errors
S3 is the authoritative storage for audit trail, not Oracle logs
Chunk 6: Workflow / Processing Flow
Metadata:
storyId=IA-2852, type=workflow, orchestration=responseCapture
Content:
Processing Flow:
Submit payment batch to Oracle ERP
Oracle sends response for each PaymentId in the batch
Capture response (success/failure)
Normalize response fields (PaymentBatchId, PaymentId, Status, Timestamp, ErrorMessage)
Store response records in S3 audit bucket
Monitor for any missing responses or storage failures
Key Rule:
All payments in a batch must have a response recorded in S3 before batch is considered complete
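Two pieces of the flow above, idempotent storage and completeness monitoring, can be sketched with deterministic S3 keys. The key prefix is hypothetical; the point is that writing the same response twice targets the same object, so duplicates are harmless, and missing PaymentIds fall out of a set difference.

```python
def audit_key(batch_id, payment_id):
    """Deterministic S3 object key (prefix is hypothetical): re-writing the
    same response overwrites the same object, making storage idempotent."""
    return f"payment-responses/{batch_id}/{payment_id}.json"

def missing_responses(submitted_payment_ids, stored_keys, batch_id):
    """Flag PaymentIds in the batch that have no response object in S3 yet."""
    expected = {audit_key(batch_id, pid) for pid in submitted_payment_ids}
    return sorted(k.rsplit("/", 1)[1][:-5]  # strip prefix and ".json"
                  for k in expected - set(stored_keys))
```

A batch would be considered complete only when `missing_responses` returns an empty list, per the key rule above.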
Chunk 7: Business Rules
Metadata:
storyId=IA-2852, type=businessRules, domain=paymentProcessing
Content:
Core Rules:
Each PaymentId must have a corresponding Oracle response
Successful payments require no further action
Failed payments must include ErrorMessage for troubleshooting and corrective actions
S3 audit bucket acts as authoritative record for batch response
Edge Cases:
ERP receives 403 error → response not captured; rerun required after URL fix
Partial responses → system flags missing PaymentIds for follow-up
Duplicate responses → ensure idempotent storage in S3
Chunk 8: Data Quality Assumptions and Risks
Metadata:
storyId=IA-2852, type=dataQuality, riskLevel=low
Content:
Assumptions:
Oracle responses are complete and accurate
ERP submits batches successfully
S3 bucket permissions allow reliable writes
Risks:
Incorrect or missing PaymentId in response → reconciliation errors
403 errors due to incorrect URL prevent capture of responses
Network or permission failures during S3 write
Mitigation:
Validate URL configuration with ERP and Oracle
Implement retries and error logging for failed writes
Monitor completeness of captured responses against submitted batch
Chunk 9: Search Queries Supported
Metadata:
storyId=IA-2852, type=queryPatterns, purpose=RAGRetrieval
Content:
This knowledge base supports queries such as:
"How are Oracle payment responses captured after batch submission?"
"What fields are stored for each payment response?"
"Where are payment response audit files stored?"
"How are failed payments identified and tracked?"
"What happens if Oracle or ERP returns a 403 error?"
"How is S3 used for payment batch audit storage?"
"How is the response workflow processed for each batch?"

@@ -0,0 +1,189 @@
JIRA Story - RAG Knowledge Base
Accounting IA-2854: Add Transaction ID & Installment # to be Stored for Pending Cancel
Chunk 1: Overview
Metadata:
storyId=IA-2854, type=overview, domain=pendingCancel, workflow=dataStorage
Content:
Purpose:
Capture TransactionId and InstallmentNumber for each Pending Cancel record in AWS to maintain traceability to the source transaction invoice.
Business Goal:
Enable precise identification and reconciliation of Pending Cancel notices with their originating transactions, ensuring accurate processing, filtering, and reporting.
Core Behavior:
Store TransactionId and InstallmentNumber in AWS DynamoDB for each Pending Cancel
Allow filtering by PolicyNumber, TransactionId, and InstallmentNumber
Retrieve the latest transaction as the active invoice during processing
Outcome:
Pending Cancel records are fully traceable to their source transactions
Step Functions and Lambdas can accurately process, filter, and reconcile Pending Cancel data
Chunk 2: Preconditions and Dependencies
Metadata:
storyId=IA-2854, type=preconditions, dependencies=AWS,DynamoDB,iSeries,Lambda,StepFunctions
Content:
Required Preconditions:
AWS DynamoDB table for Pending Cancel data already exists
iSeries data flow for Pending Cancel records is operational
Step Functions and Lambdas for processing Pending Cancel data are deployed
Dependencies:
DynamoDB schema must support TransactionId and InstallmentNumber fields
Step Functions must consume Pending Cancel data with the new fields
Lambdas must filter and retrieve records based on PolicyNumber, TransactionId, and InstallmentNumber
Implicit Rule (Made Explicit):
Without DynamoDB table availability or schema updates, new fields cannot be captured
Step Functions and Lambdas must be updated in tandem with schema changes
Chunk 3: Functional Requirements - Pending Cancel Data
Metadata:
storyId=IA-2854, type=functional, documentType=PendingCancel
Content:
Required Fields (Normalized):
PolicyNumber
TransactionId
InstallmentNumber
PendingCancelNoticeDate
Status (Active/Processed)
Purpose:
Enhance Pending Cancel data model to enable traceability to original transactions and invoices.
Implicit Rules (Made Explicit):
PolicyNumber is primary key for retrieval
TransactionId represents the source invoice
InstallmentNumber identifies the specific installment linked to the Pending Cancel
Latest TransactionId per PolicyNumber is considered active for processing
Chunk 4: Workflow / Processing Flow
Metadata:
storyId=IA-2854, type=workflow, orchestration=pendingCancelProcessing
Content:
Processing Flow:
Receive Pending Cancel data from iSeries
Use PolicyNumber to query existing DynamoDB records
Store TransactionId and InstallmentNumber alongside Pending Cancel record
Step Functions and Lambdas filter records by PolicyNumber, TransactionId, and InstallmentNumber
During reconciliation, use the latest TransactionId as the active invoice
Generate notices or downstream triggers based on active Pending Cancel records
Key Rule:
Only the latest TransactionId per PolicyNumber is considered active; older transactions are ignored for processing
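The active-invoice selection and incomplete-record flagging can be sketched as below. The record shape is an assumption, as is the premise that TransactionId values sort in creation order; if they do not, the comparison would need a timestamp instead.

```python
def select_active_transaction(records):
    """Pick the active Pending Cancel record for one PolicyNumber.

    The latest TransactionId wins; records lacking TransactionId or
    InstallmentNumber are flagged as incomplete per the business rules.
    Assumes (hypothetically) that TransactionId values sort in creation order.
    """
    incomplete = [r for r in records
                  if not r.get("TransactionId") or not r.get("InstallmentNumber")]
    complete = [r for r in records if r not in incomplete]
    active = max(complete, key=lambda r: r["TransactionId"]) if complete else None
    return active, incomplete
```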
Chunk 5: Non-Functional / Technical Requirements
Metadata:
storyId=IA-2854, type=nonFunctional, category=performanceReliability
Content:
System Requirements:
DynamoDB writes must be atomic and consistent to prevent mismatches
Step Functions and Lambdas must process and filter data with minimal latency
Ensure backward compatibility with existing Pending Cancel processing
Performance Consideration:
High-volume Pending Cancel loads must not degrade Lambda performance
DynamoDB queries by PolicyNumber, TransactionId, InstallmentNumber must be efficient
Chunk 6: External System Responsibilities
Metadata:
storyId=IA-2854, type=externalSystems, systems=iSeries,AWS,DynamoDB,Lambda,StepFunctions
Content:
iSeries Responsibilities:
Provide TransactionId, InstallmentNumber, PolicyNumber, and Pending Cancel records
AWS / DynamoDB Responsibilities:
Store Pending Cancel records with new TransactionId and InstallmentNumber fields
Provide query capabilities for filtering and retrieval
Lambda Responsibilities:
Filter, retrieve, and process Pending Cancel records using PolicyNumber, TransactionId, and InstallmentNumber
Step Functions Responsibilities:
Orchestrate processing of Pending Cancel records through multiple Lambdas
Ensure active TransactionId is selected for downstream actions
Implicit Rule (Made Explicit):
AWS/DynamoDB acts as the authoritative data store for Pending Cancel records
Step Functions and Lambdas must respect the active TransactionId logic
Chunk 7: Business Rules
Metadata:
storyId=IA-2854, type=businessRules, domain=pendingCancelProcessing
Content:
Core Rules:
Each Pending Cancel record must store TransactionId and InstallmentNumber
Latest TransactionId per PolicyNumber is active for processing
Step Functions and Lambdas filter based on PolicyNumber, TransactionId, and InstallmentNumber
Records without TransactionId or InstallmentNumber are considered incomplete and flagged
Edge Cases:
Multiple Pending Cancels for same PolicyNumber → only the latest TransactionId is active
Missing TransactionId → record cannot be reconciled
InstallmentNumber mismatch → flag for review
Chunk 8: Data Quality Assumptions and Risks
Metadata:
storyId=IA-2854, type=dataQuality, riskLevel=low
Content:
Assumptions:
iSeries provides complete TransactionId and InstallmentNumber data
DynamoDB writes are successful and consistent
Step Functions and Lambdas correctly implement filtering logic
Risks:
Missing or incorrect TransactionId may lead to incorrect Pending Cancel mapping
Inconsistent InstallmentNumber could cause reconciliation errors
Failure in Lambda processing could skip active TransactionId selection
Mitigation:
Validate iSeries data completeness prior to storage
Implement idempotent DynamoDB writes
Monitor Step Function and Lambda execution logs for failures
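The idempotent-write mitigation can be illustrated with an in-memory stand-in for a DynamoDB PutItem guarded by an attribute_not_exists condition. The key shape (PolicyNumber, TransactionId) and the exception name are assumptions of this sketch, not the actual table schema.

```python
class ConditionalCheckFailed(Exception):
    """Stand-in for DynamoDB's ConditionalCheckFailedException."""

def conditional_put(table, item):
    """Insert an item only if its composite key is not already present,
    mimicking PutItem with a condition expression."""
    key = (item["PolicyNumber"], item["TransactionId"])
    if key in table:
        raise ConditionalCheckFailed(f"duplicate write for {key}")
    table[key] = item

def put_if_absent(table, item):
    """Retry-safe wrapper: a replayed write becomes a no-op, so a Lambda
    retry cannot create duplicate Pending Cancel records."""
    try:
        conditional_put(table, item)
    except ConditionalCheckFailed:
        pass  # already stored; safe to ignore on retry
```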
Chunk 9: Search Queries Supported
Metadata:
storyId=IA-2854, type=queryPatterns, purpose=RAGRetrieval
Content:
This knowledge base supports queries such as:
"How is TransactionId stored for Pending Cancel records?"
"How do Step Functions filter Pending Cancel data?"
"How is the latest TransactionId determined per PolicyNumber?"
"What fields are required for Pending Cancel processing?"
"How are InstallmentNumber and TransactionId used in reconciliation?"
"Which AWS components handle Pending Cancel storage and filtering?"
"What happens if TransactionId or InstallmentNumber is missing?"

JIRA Story - RAG Knowledge Base
Accounting IA-2855: Add Transaction ID and Installment # to be Stored for Billing Statements
Chunk 1: Overview
Metadata:
storyId=IA-2855, type=overview, domain=billingStatement, workflow=dataStorage
Content:
Purpose:
Enhance the Billing Statement data model in AWS to include TransactionId and InstallmentNumber, ensuring each statement is traceable to its source transaction invoice.
Business Goal:
Enable accurate reconciliation and processing of Billing Statements by linking them to the originating transaction invoice, allowing step functions and lambdas to filter and process data effectively.
Core Behavior:
Store TransactionId and InstallmentNumber in AWS DynamoDB Billing Statement table
Allow filtering by PolicyNumber, TransactionId, and InstallmentNumber
Treat latest TransactionId as the active invoice during processing
Outcome:
Billing Statement records are fully traceable to source transactions
Step Functions and Lambdas can reliably filter and reconcile Billing Statement data
Supports consistent downstream processing and reporting
Chunk 2: Preconditions and Dependencies
Metadata:
storyId=IA-2855, type=preconditions, dependencies=AWS,DynamoDB,iSeries,Lambda,StepFunctions
Content:
Required Preconditions:
AWS DynamoDB Billing Statement table exists
Step Functions and Lambdas for Billing Statement processing are deployed
iSeries data flow for Billing Statements is operational
Dependencies:
DynamoDB schema must support TransactionId and InstallmentNumber attributes
Step Functions and Lambdas must be updated to filter by PolicyNumber, TransactionId, and InstallmentNumber
Latest TransactionId per PolicyNumber must be treated as active
Implicit Rule (Made Explicit):
Processing cannot accurately filter or reconcile Billing Statements without the new attributes
iSeries will not send TransactionId back to AWS; system must use previously stored data
Chunk 3: Functional Requirements - Billing Statement Data
Metadata:
storyId=IA-2855, type=functional, documentType=BillingStatement
Content:
Required Fields (Normalized):
PolicyNumber
TransactionId
InstallmentNumber
BillingStatementDate
Status (Active/Processed)
Purpose:
Enhance Billing Statement records to maintain traceability to the originating transaction invoice.
Implicit Rules (Made Explicit):
PolicyNumber remains the primary key
TransactionId represents the source transaction invoice
InstallmentNumber identifies the specific installment related to the Billing Statement
Latest TransactionId per PolicyNumber is treated as the active invoice
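The normalized field list above can be captured as a typed record with a completeness check. The class is a sketch under the story's field names; the concrete attribute types are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

REQUIRED_FIELDS = ("PolicyNumber", "TransactionId", "InstallmentNumber",
                   "BillingStatementDate", "Status")

@dataclass
class BillingStatement:
    PolicyNumber: str                     # primary key, unchanged by this story
    TransactionId: Optional[str] = None   # source transaction invoice
    InstallmentNumber: Optional[int] = None
    BillingStatementDate: Optional[str] = None
    Status: Optional[str] = "Active"      # Active/Processed

    def is_complete(self) -> bool:
        """A statement missing any required attribute is flagged as incomplete."""
        return all(getattr(self, f) is not None for f in REQUIRED_FIELDS)
```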
Chunk 4: Workflow / Processing Flow
Metadata:
storyId=IA-2855, type=workflow, orchestration=billingStatementProcessing
Content:
Processing Flow:
Receive Billing Statement data from iSeries
Query DynamoDB using PolicyNumber
Store or update TransactionId and InstallmentNumber attributes
Step Functions and Lambdas filter records using PolicyNumber, TransactionId, and InstallmentNumber
Select the latest TransactionId as the active invoice for downstream processing
Generate billing notices or trigger downstream workflows based on active invoices
Key Rule:
Only the latest TransactionId per PolicyNumber is considered active; older transactions are ignored
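The filtering and active-invoice steps of this flow can be sketched as plain functions over a list of items, standing in for a Lambda filtering the results of a DynamoDB query keyed on PolicyNumber. Function names are illustrative, not from the story.

```python
def filter_statements(records, policy_number,
                      transaction_id=None, installment_number=None):
    """Narrow Billing Statement items by PolicyNumber, then optionally by
    TransactionId and InstallmentNumber, per the story's filter keys."""
    out = [r for r in records if r["PolicyNumber"] == policy_number]
    if transaction_id is not None:
        out = [r for r in out if r["TransactionId"] == transaction_id]
    if installment_number is not None:
        out = [r for r in out if r["InstallmentNumber"] == installment_number]
    return out

def active_invoice(records, policy_number):
    """Key rule: the latest TransactionId per PolicyNumber is the active
    invoice; older transactions are ignored."""
    matches = filter_statements(records, policy_number)
    return max(matches, key=lambda r: r["TransactionId"]) if matches else None
```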
Chunk 5: Non-Functional / Technical Requirements
Metadata:
storyId=IA-2855, type=nonFunctional, category=performanceReliability
Content:
System Requirements:
DynamoDB writes must be atomic and consistent
Step Functions and Lambdas must efficiently filter and process data
Existing primary key (PolicyNumber) must remain unchanged
Performance Consideration:
High-volume Billing Statement processing must not degrade Lambda performance
Queries by PolicyNumber, TransactionId, and InstallmentNumber must be efficient
Chunk 6: External System Responsibilities
Metadata:
storyId=IA-2855, type=externalSystems, systems=iSeries,AWS,DynamoDB,Lambda,StepFunctions

Content:
iSeries Responsibilities:
Provide Billing Statement records with PolicyNumber
TransactionId is not sent back; system must maintain mapping
AWS / DynamoDB Responsibilities:
Store Billing Statement records with TransactionId and InstallmentNumber attributes
Allow filtering and retrieval based on PolicyNumber, TransactionId, and InstallmentNumber
Lambda Responsibilities:
Filter, retrieve, and process Billing Statement records based on new attributes
Step Functions Responsibilities:
Orchestrate processing of Billing Statement records
Ensure active TransactionId is correctly selected for downstream workflows
Implicit Rule (Made Explicit):
AWS/DynamoDB is the authoritative source for TransactionId mapping since iSeries does not return it
Chunk 7: Business Rules
Metadata:
storyId=IA-2855, type=businessRules, domain=billingStatementProcessing
Content:
Core Rules:
Each Billing Statement must store TransactionId and InstallmentNumber
Latest TransactionId per PolicyNumber is active for processing
Step Functions and Lambdas must filter using PolicyNumber, TransactionId, and InstallmentNumber
Billing Statement records without TransactionId or InstallmentNumber are incomplete and flagged
Edge Cases:
Multiple Billing Statements for the same PolicyNumber → only the latest TransactionId is active
Missing TransactionId → reconciliation cannot proceed
InstallmentNumber mismatch → flag for review
Chunk 8: Data Quality Assumptions and Risks
Metadata:
storyId=IA-2855, type=dataQuality, riskLevel=low
Content:
Assumptions:
iSeries provides accurate Billing Statement data
DynamoDB writes succeed consistently
Step Functions and Lambdas correctly implement filtering logic
Risks:
Missing or incorrect TransactionId leads to reconciliation errors
InstallmentNumber inconsistencies cause mismatches
Lambda or Step Function failures may skip active TransactionId selection
Mitigation:
Validate completeness of Billing Statement records before storage
Implement idempotent writes in DynamoDB
Monitor Step Function and Lambda execution logs for failures
Chunk 9: Search Queries Supported
Metadata:
storyId=IA-2855, type=queryPatterns, purpose=RAGRetrieval
Content:
This knowledge base supports queries such as:
"How is TransactionId stored for Billing Statement records?"
"How do Step Functions filter Billing Statement data?"
"How is the latest TransactionId determined per PolicyNumber?"
"What fields are required for Billing Statement processing?"
"How are InstallmentNumber and TransactionId used in reconciliation?"
"Which AWS components handle Billing Statement storage and filtering?"
"What happens if TransactionId is missing from iSeries data?"

JIRA Story - RAG Knowledge Base
Accounting IA-2858: Payment Plan Change Processing
Chunk 1: Overview
Metadata:
storyId=IA-2858, type=overview, domain=payments, workflow=planChangeProcessing
Content:
Purpose:
Ensures customer payment plan changes are processed before payment application to maintain accurate invoice allocation in Oracle.
Business Goal:
Prevent misapplication of payments by sequencing plan change operations ahead of payment batching and cash receipt processing.
Core Behavior:
Detect payment plan change requirement
Execute plan change workflow first
Then process payment against updated invoice structure
Outcome:
Payments are applied to the correct (new) invoice after plan restructuring.
Chunk 2: Preconditions and Dependencies
Metadata:
storyId=IA-2858, type=preconditions, dependencies=planChangeDetection
Content:
Required Preconditions:
System must identify that a payment plan change is required prior to payment processing
Detection logic exists upstream (outside this story scope)
Dependencies:
Plan change identification logic (external or prior step)
Step Function orchestration for sequencing workflows
Oracle Accounts Receivable for invoice and payment application
Implicit Rule (Made Explicit):
If plan change is not detected, standard payment processing proceeds unchanged.
Chunk 3: Functional Requirement - Payment Handling Rules
Metadata:
storyId=IA-2858, type=functional, domain=paymentProcessing
Content:
Payment Processing Behavior When Plan Change Exists:
Payment is processed without sending InvoiceNumber to Oracle
Oracle creates a cash receipt only (no application to invoice)
Required Fields:
OracleCustomerNumber must always be provided
PO Number must NOT be provided when plan change occurs
Implicit Rule (Made Explicit):
Absence of InvoiceNumber intentionally prevents auto-application
This ensures payment is temporarily unapplied until new invoice exists
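The payload rules above can be sketched as a small builder function. Field names and payload shape are illustrative assumptions; the encoded rules are the story's: OracleCustomerNumber is always required, and the invoice reference is deliberately omitted when a plan change occurs so Oracle creates a cash receipt only.

```python
def build_payment_payload(oracle_customer_number, amount,
                          invoice_number=None, plan_change=False):
    """Assemble the fields sent to Oracle for a payment (sketch).

    Omitting InvoiceNumber intentionally prevents auto-application, so
    the payment stays unapplied until the new invoice exists.
    """
    if not oracle_customer_number:
        raise ValueError("OracleCustomerNumber is always required")
    payload = {"OracleCustomerNumber": oracle_customer_number,
               "Amount": amount}
    if not plan_change and invoice_number is not None:
        payload["InvoiceNumber"] = invoice_number  # normal application path
    return payload
```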
Chunk 4: Functional Requirement - Post Plan Change Application
Metadata:
storyId=IA-2858, type=functional, domain=invoiceApplication
Content:
Post-Plan Change Behavior:
Plan change creates a new invoice
Existing system functionality automatically applies previously created cash receipt to the new invoice
Implicit Rule (Made Explicit):
No manual reconciliation required
System relies on Oracle auto-application logic after invoice creation
Chunk 5: Workflow and Processing Sequence
Metadata:
storyId=IA-2858, type=workflow, orchestration=stepFunctions
Content:
Processing Flow:
Detect payment plan change requirement
Execute Plan Change Step Function
Plan change creates new invoice structure
Pass control to Cash Receipt Step Function
Include flag indicating whether plan change occurred
Process payment:
If plan change = true → create unapplied cash receipt
If plan change = false → normal invoice application
Oracle auto-applies receipt after invoice creation (if applicable)
Key Rule:
Plan change processing must occur before any payment batching or cash receipt creation.
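The sequencing rule can be sketched with callables standing in for the two Step Functions. This is a control-flow illustration only; the real orchestration lives in Step Functions state machines, and the function and flag names are assumptions.

```python
def process_payment_request(payment, plan_change_required,
                            run_plan_change, run_cash_receipt):
    """Enforce the ordering rule: the plan change workflow completes
    before the cash receipt workflow starts, and a state flag indicating
    whether a plan change occurred is passed between them."""
    plan_change_occurred = False
    if plan_change_required:
        run_plan_change(payment)   # creates the new invoice structure first
        plan_change_occurred = True
    # The flag travels with the payment into the cash receipt workflow:
    # True -> unapplied cash receipt; False -> normal invoice application
    return run_cash_receipt(payment, plan_change_occurred)
```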
Chunk 6: Non-Functional Requirements
Metadata:
storyId=IA-2858, type=nonFunctional, category=orchestration
Content:
System Requirements:
Step Function orchestration enforces strict execution order
Plan change workflow must complete before payment workflow begins
System must pass state flag between workflows indicating plan change occurrence
Performance Consideration:
Sequential dependency may introduce latency but ensures correctness
Chunk 7: External System Responsibilities
Metadata:
storyId=IA-2858, type=externalSystems, systems=Oracle,AWS
Content:
AWS Responsibilities:
Orchestrate workflows using Step Functions
Detect and propagate plan change flag
Control sequencing of operations
Oracle Responsibilities:
Create cash receipt when InvoiceNumber is not provided
Auto-apply payment once new invoice is created
Implicit Rule (Made Explicit):
Oracle behavior is leveraged intentionally (not overridden) to handle delayed application.
Chunk 8: Business Rules
Metadata:
storyId=IA-2858, type=businessRules, domain=payments
Content:
Core Rules:
Plan change must precede payment application
InvoiceNumber must be omitted when plan change occurs
OracleCustomerNumber is always required
Payments initially remain unapplied when plan change is in progress
System must rely on Oracle auto-application after invoice creation
Edge Case Handling:
If plan change flag is incorrect → risk of misapplied payment
If invoice created after payment → Oracle auto-application resolves linkage
Chunk 9: Data Quality Assumptions and Risks
Metadata:
storyId=IA-2858, type=dataQuality, riskLevel=low
Content:
Assumptions:
Plan change detection is accurate and reliable
Oracle auto-application logic behaves consistently
OracleCustomerNumber is always available
Risks:
Missing or incorrect plan change flag → incorrect payment application
Timing issues between invoice creation and receipt processing
Dependency on Oracle auto-apply behavior introduces external coupling
Chunk 10: Search Queries Supported
Metadata:
storyId=IA-2858, type=queryPatterns, purpose=RAGRetrieval
Content:
This knowledge base supports queries such as:
"How are payments handled when a payment plan changes?"
"Why is InvoiceNumber not sent to Oracle during plan change?"
"What happens to payments before a new invoice is created?"
"How does Oracle apply payments after a plan change?"
"What is the workflow for payment plan change processing?"
"What flag indicates a plan change occurred?"
"How are unapplied cash receipts created?"
"What are the sequencing requirements for plan change vs payment processing?"
"What fields are required when processing payments with plan changes?"
"What risks exist if plan change detection fails?"

JIRA Story IA-2866 - RAG Knowledge Base
Chunk 1: Story Overview
Metadata: module=JIRA, storyId=IA-2866, type=overview, domain=dataPreloading, system=AWS
Purpose:
Preloads iSeries-derived data into AWS data store to support downstream nightly processing for pending cancellations, cancellations, and billing statements.
Business Goal:
Ensure required historical and transactional data is available so nightly processes can successfully validate and retrieve prior information.
Key Concepts:
data preloading, AWS data store, iSeries integration, billing statements, cancellation processing, nightly validation workflows.
Chunk 2: Preconditions
Metadata: storyId=IA-2866, type=preconditions, dependencies=externalSystems
Requirements:
Data must be extracted from iSeries and/or EDW
Oracle identifier may exist on payment schedule file (requires confirmation)
Purpose:
Defines required upstream data availability before AWS preloading can occur.
Chunk 3: Pending Cancel Document Requirements
Metadata: storyId=IA-2866, type=requirements, documentType=pendingCancel
Required Fields:
AccountIdentifier
PrimaryCancellationReason
CancellationNoticeDate
CancellationEffectiveDate
PolicyNumber
TransactionId
InstallmentNumber
Purpose:
Defines data required to construct pending cancellation records in AWS.
Chunk 4: Billing Statement Document Requirements
Metadata: storyId=IA-2866, type=requirements, documentType=billingStatement
Required Fields:
AccountIdentifier
PaymentDueDate
StatementType
PolicyNumber
TransactionId
InstallmentNumber
Purpose:
Defines data required for billing statement records used in downstream processing.
Chunk 5: Cancellation Requirements
Metadata: storyId=IA-2866, type=requirements, documentType=cancellation
Required Fields:
AccountIdentifier
CancellationDate
PolicyNumber
Purpose:
Defines minimal dataset required to represent completed cancellations.
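The required-field sets of Chunks 3 through 5 can be collected into a single validation map used before preloading. The map mirrors the fields listed above; the helper function itself is an illustrative sketch.

```python
# Required-field sets per document type, as listed in Chunks 3-5
REQUIRED_BY_DOC_TYPE = {
    "pendingCancel": {"AccountIdentifier", "PrimaryCancellationReason",
                      "CancellationNoticeDate", "CancellationEffectiveDate",
                      "PolicyNumber", "TransactionId", "InstallmentNumber"},
    "billingStatement": {"AccountIdentifier", "PaymentDueDate",
                         "StatementType", "PolicyNumber",
                         "TransactionId", "InstallmentNumber"},
    "cancellation": {"AccountIdentifier", "CancellationDate", "PolicyNumber"},
}

def missing_fields(doc_type, record):
    """Return the required attributes absent (or None) in a record,
    so incomplete documents can be enriched before AWS preloading."""
    required = REQUIRED_BY_DOC_TYPE[doc_type]
    present = {k for k, v in record.items() if v is not None}
    return sorted(required - present)
```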
Chunk 6: External System Responsibilities
Metadata: storyId=IA-2866, type=integration, systems=iSeries,AWS,Oracle
iSeries Responsibilities:
Provides PolicyNumber
Provides InstallmentNumber
Provides document type information
AWS Responsibilities:
Fetch missing attributes from iSeries APIs
Fetch additional data from Oracle systems
Oracle Role:
Supplies supplemental identifiers and attributes not available in iSeries
Chunk 7: Installment Handling Rules
Metadata: storyId=IA-2866, type=businessRules, domain=installments
Rules:
iSeries sends only the latest InstallmentNumber
If InstallmentNumber > 1:
System must pre-populate Installment 1 data
Ensures billing statement completeness
Purpose:
Ensures correct historical representation of billing data when only partial installment data is provided.
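The installment rule above can be sketched as a small pre-population step: when iSeries sends only the latest InstallmentNumber and it is greater than 1, an Installment 1 record is synthesized so billing statement history is complete. Copying the remaining fields unchanged is an assumption of this sketch; the real enrichment may pull Installment 1 attributes from iSeries APIs or Oracle.

```python
def prepopulate_installments(record):
    """Expand a latest-installment record into the list to be stored.

    If InstallmentNumber > 1, prepend a synthesized Installment 1
    placeholder; otherwise the record is stored as-is.
    """
    records = [record]
    if record.get("InstallmentNumber", 1) > 1:
        first = dict(record, InstallmentNumber=1)  # placeholder for history
        records.insert(0, first)
    return records
```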
Chunk 8: Data Preloading Workflow
Metadata: storyId=IA-2866, type=workflow, stage=preprocessing
Flow:
Extract data from iSeries and/or EDW
Validate availability of required identifiers
Enrich missing data via AWS calls to iSeries APIs and Oracle
Apply installment handling rules
Store processed data in AWS data store
Make data available for nightly processing
Purpose:
Describes end-to-end data preparation process.
Chunk 9: Data Quality and Assumptions
Metadata: storyId=IA-2866, type=dataQuality, validation=assumed
Assumptions:
Required fields are available from upstream systems
Oracle identifiers may require validation
Installment data may be incomplete and requires augmentation
Risks:
Missing Oracle identifiers could impact downstream linking
Incomplete installment data without rule handling could break billing logic
Chunk 10: Search Queries Supported
Metadata: storyId=IA-2866, type=queryPatterns, purpose=RAGRetrieval
Examples:
What data is required for pending cancel documents?
What fields are needed for billing statement generation?
What systems provide installment data?
How are missing attributes retrieved in AWS?
What are the rules for installment handling?
What data is required for cancellation records?
What are the preconditions for AWS data preloading?
How does AWS enrich iSeries data?
What happens when installment number is greater than 1?