CORE is an automated means for an independent third party to evaluate and score basic operational functions within financial institutions. It is used for internal operational compliance reviews, and to prepare for CFPB, NMLS, OCC, and other regulatory audits.
The scoring output provides users with an overall performance benchmark relative to an industry standard and helps to identify specific operational strengths and weaknesses.
The overall score reflects the logic in the scoring engine. It is not intended to replace human expertise or to account for extenuating market factors.
CORE’s Intelligent Workflow Design
CORE is designed to intelligently guide users through the process of gathering participant review information. Reviewers ask questions covering certain Functions within Operational Categories and record responses in the CORE system. Answers to questions open additional related questions in a decision tree format.
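The decision-tree behavior described above can be sketched as follows. This is a minimal illustration only; the question IDs, text, and data layout are assumptions, not the actual CORE schema.

```python
# Hypothetical sketch of a decision-tree question flow: each question may
# declare follow-up questions keyed by the answer given. (Question IDs and
# wording are illustrative assumptions, not actual CORE content.)
QUESTIONS = {
    "Q1": {
        "text": "Are written escalation procedures in place?",
        "follow_ups": {"yes": ["Q2"], "no": []},
    },
    "Q2": {
        "text": "Are the procedures reviewed at least annually?",
        "follow_ups": {},
    },
}

def record_answer(question_id, answer):
    """Record an answer and return the IDs of any follow-up questions it opens."""
    question = QUESTIONS[question_id]
    return question["follow_ups"].get(answer, [])

# A "yes" to Q1 opens the related follow-up Q2; a "no" would open nothing.
opened = record_answer("Q1", "yes")
```

In this sketch, answering a question simply returns the list of newly opened questions, which a reviewer-facing workflow would then present in turn.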
The CORE automated scoring engine reads the input answers; override capabilities exist where appropriate. The engine generates scores automatically from the data provided, in two phases.
First, scores are generated for each Function based on the responses. Once the Function scores have been generated, the system uses a weighting process to calculate comparative Function-level scores.
Second, Functional scores are tallied within each Category. CORE then again uses a weighting process to generate the Category scores.
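The two-phase weighting described above can be illustrated with a short sketch. The function names, weights, and scores below are assumptions for illustration; CORE's actual weights are internal to its scoring engine.

```python
# Hypothetical sketch of two-phase weighted scoring. All names, weights,
# and score values are illustrative assumptions.

def weighted_average(scores, weights):
    """Weight each score and normalize by the total weight."""
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Phase 1: a Function score is a weighted average of its test responses.
test_scores = {"test_a": 4.0, "test_b": 3.0}
test_weights = {"test_a": 2.0, "test_b": 1.0}
function_score = weighted_average(test_scores, test_weights)

# Phase 2: Function scores are tallied and weighted within each Category.
function_scores = {"function_1": function_score, "function_2": 3.5}
function_weights = {"function_1": 1.0, "function_2": 1.0}
category_score = weighted_average(function_scores, function_weights)
```

The same `weighted_average` helper serves both phases, which mirrors how the text describes the weighting process being applied first at the Function level and then again at the Category level.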
The scoring mechanism provides a standardized means of comparing changes from period to period, and across counterparties or service locations. Questions are designed and weighted to best discern comparative quality, strengths and weaknesses, and areas on which to focus improvement efforts.
CORE contains seven (7) Operational Categories as follows:
- Procedures and Controls
- Regulatory and Compliance
- Loan Administration
- Collections; and
Within these seven Categories there are 38 Functions identified as critical in assessing a financial institution.
Among the 38 Functions are approximately 400 tests, each with on average two to three answer criteria used to generate a score.
Logic has been coded to generate an overall participant score based on the combination of answers.
Scoring on all levels is defined as follows:
- Excellent – C1 (exceeds industry benchmark);
- Good Without Conditions – C2 (meets industry benchmark);
- Satisfactory With Conditions – C3 (could meet benchmark); and
- Unsatisfactory – C4 (not possible to meet benchmark without dramatic changes).
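A grade assignment like the one above can be sketched as a simple threshold lookup. The numeric thresholds here are assumptions for illustration; CORE's actual cut-offs are determined by its scoring engine.

```python
# Illustrative mapping from a numeric score to the C1-C4 grades listed above.
# The threshold values are assumptions, not CORE's actual cut-offs.
GRADES = [
    (3.5, "C1"),  # Excellent - exceeds industry benchmark
    (2.5, "C2"),  # Good Without Conditions - meets industry benchmark
    (1.5, "C3"),  # Satisfactory With Conditions - could meet benchmark
]

def grade(score):
    """Return the first grade whose threshold the score meets, else C4."""
    for threshold, label in GRADES:
        if score >= threshold:
            return label
    return "C4"  # Unsatisfactory - dramatic changes needed to meet benchmark
```

Ordering the thresholds from highest to lowest lets the first match win, so each score maps to exactly one grade.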