Framework Overview
The Universal AIQ Framework provides a standardized 0-100 score for individual AI competency, built on five dimensions that form the SCOREs acronym:
- S - Study (Information & Fluency)
- C - Copy (Evaluation & Rigor)
- O - Output (Deployment & Impact)
- R - Research (Innovation & Contribution)
- Es - Ethical security (Safety & Responsibility)
Score Bands
| Score | Level | What It Means |
|---|---|---|
| 0-20 | Unaware | No meaningful AI adoption. At risk of displacement. |
| 21-40 | User | Basic AI usage. Follows instructions. Needs supervision. |
| 41-60 | Practitioner | Daily productive use. Can evaluate quality. ~25% efficiency gain. |
| 61-80 | Builder | Deploys reliable systems. Creates measurable business value. |
| 81-95 | Architect | Advances practices. Mentors others. Trusted for critical work. |
| 96-100 | Pioneer | Industry-recognized contribution. Shapes how AI is used. |
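The band boundaries above amount to a simple lookup. A minimal sketch (the thresholds are taken directly from the table; the function name is illustrative):

```python
def score_band(score: int) -> str:
    """Map a 0-100 AIQ score to its band label (bands from the table above)."""
    bands = [(20, "Unaware"), (40, "User"), (60, "Practitioner"),
             (80, "Builder"), (95, "Architect"), (100, "Pioneer")]
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    for upper, label in bands:
        if score <= upper:
            return label

print(score_band(55))  # Practitioner
print(score_band(96))  # Pioneer
```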
Evidence Levels
| Level | Multiplier | Description |
|---|---|---|
| Level 1: Self | 0.70x | Self-report only. No validation. |
| Level 2: Peer | 0.85x | Peer or manager confirmed the work. |
| Level 3: Verified | 1.0x | Automated logs or full audit with evidence. |
Scoring Weight Tables
Your final score is calculated by weighting each SCOREs dimension based on three factors:
- Role — which skills matter most for your job function
- Company Type — organizational priorities that modify role weights
- Assessment Level — evidence confidence (you control this by getting peer validation or verified evidence)
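Putting the three factors together, one plausible reading of the formula is a weighted sum of dimension scores scaled by the evidence multiplier. This is a sketch, not the framework's published implementation, and the weights below are illustrative placeholders rather than an actual role table:

```python
# Evidence multipliers from the Evidence Levels table above
EVIDENCE_MULTIPLIER = {"self": 0.70, "peer": 0.85, "verified": 1.0}

def final_score(dim_scores: dict, weights: dict, evidence: str) -> float:
    """Weighted sum of SCOREs dimension scores, scaled by evidence confidence.

    dim_scores: per-dimension scores on a 0-100 scale
    weights:    per-dimension weights summing to 1.0 (role/company specific)
    evidence:   "self", "peer", or "verified"
    """
    weighted = sum(dim_scores[d] * weights[d] for d in weights)
    return round(weighted * EVIDENCE_MULTIPLIER[evidence], 1)

# Hypothetical practitioner profile with made-up weights
scores  = {"S": 70, "C": 60, "O": 50, "R": 30, "Es": 65}
weights = {"S": 0.25, "C": 0.25, "O": 0.25, "R": 0.10, "Es": 0.15}
print(final_score(scores, weights, "peer"))  # 49.1
```

Note how the same underlying performance scores yield a lower final score at Level 1 (0.70x) than at Level 3 (1.0x), which is the incentive to seek peer or verified evidence.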
Dual Scoring Weights
The framework uses two separate scores to distinguish what you know from what your organization has enabled:
"Am I ready to deliver AI value?"
Emphasizes Study + Copy (knowledge & evaluation skills)
Used for individual dashboards and comparisons.
"Has my org enabled AI delivery?"
Emphasizes Output + Ethical (deployment & governance)
Reveals organizational enablement gaps.
Personal Readiness Weights
| Role | Study | Copy | Output | Research | Ethical |
|---|---|---|---|---|---|
Corporate Impact Weights
| Role | Study | Copy | Output | Research | Ethical |
|---|---|---|---|---|---|
- Positive gap (Personal > Corporate): You have skills your org isn't utilizing → advocate for AI projects
- Negative gap (Corporate > Personal): Org is deploying faster than you're learning → invest in upskilling
- Balanced (within ±10): Skills match opportunities → continue current trajectory
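The gap interpretation above can be sketched as a small classifier; the ±10 threshold and the three recommendations come directly from the list, while the function name is an assumption:

```python
def readiness_gap(personal: float, corporate: float) -> str:
    """Classify the Personal Readiness vs. Corporate Impact gap (±10 band)."""
    gap = personal - corporate
    if gap > 10:
        return "positive: advocate for AI projects"
    if gap < -10:
        return "negative: invest in upskilling"
    return "balanced: continue current trajectory"

print(readiness_gap(72, 55))  # positive: advocate for AI projects
```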
Company Type Modifiers
Multipliers are applied to each role's weights, and the adjusted weights are then renormalized to sum to 100%.
| Type | Study | Copy | Output | Research | Ethical | Philosophy |
|---|---|---|---|---|---|---|
| Startup | 1.0x | 0.7x | 1.4x | 1.2x | 0.7x | Ship it, learn, iterate |
| Enterprise | 1.0x | 1.2x | 0.85x | 1.0x | 1.0x | Reliable, scalable, governed |
| Aspirational | 0.85x | 0.85x | 1.0x | 1.2x | 1.2x | Build AI the right way |
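The "apply multipliers, then renormalize" step might look like the sketch below. The Startup multipliers come from the table above; the base role weights are hypothetical, since the per-role weight tables are not reproduced in this section:

```python
def apply_company_modifiers(role_weights: dict, modifiers: dict) -> dict:
    """Scale role weights by company-type multipliers, then renormalize to 1.0."""
    scaled = {d: role_weights[d] * modifiers[d] for d in role_weights}
    total = sum(scaled.values())
    return {d: round(w / total, 4) for d, w in scaled.items()}

# Startup row from the Company Type Modifiers table
STARTUP = {"S": 1.0, "C": 0.7, "O": 1.4, "R": 1.2, "Es": 0.7}
# Hypothetical base role weights (actual per-role tables not shown here)
base = {"S": 0.20, "C": 0.20, "O": 0.25, "R": 0.15, "Es": 0.20}

adjusted = apply_company_modifiers(base, STARTUP)
print(adjusted)
```

After renormalization the weights again sum to 1.0, so the modifiers shift emphasis between dimensions (here, toward Output and away from Copy) without changing the total.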
Validation Matrix
What evidence validates each dimension at each assessment level.
| Dimension | Level 1 (Self) | Level 2 (Peer) | Level 3 (Verified) |
|---|---|---|---|
| S - Study | Self-reported sources | Peer confirms knowledge | Newsletter subs, course certs, reading logs |
| C - Copy | Self-reported methods | Peer reviews test cases | Eval scripts, benchmark results, CI logs |
| O - Output | Self-reported projects | Peer confirms usage | Git commits, deploy logs, usage metrics |
| R - Research | Self-reported contributions | Peer confirms novelty | Publications, patents, model weights |
| Es - Ethical security | Self-reported practices | Peer confirms safety | Audit logs, compliance records, training certs |
Full SCOREs Rubrics
S - Study (Information & Fluency)
Where do you learn about AI? Can you explain why things work or fail?
C - Copy (Evaluation & Rigor)
How do you know if AI output is good? Can you prove it?
O - Output (Deployment & Impact)
What have you built that others actually use? What value did it create?
R - Research (Innovation & Contribution)
Do you advance the field or just consume it?
Es - Ethical security (Safety & Responsibility)
Can you be trusted with AI? Do you use it safely?
For Administrators
Share pre-configured assessment links with your team to ensure everyone uses the same role and company settings.
Pre-configured Assessment Links
You can generate assessment URLs with settings pre-filled. When users open these links, their role, company type, and scoring distribution will be automatically selected.
Available URL Parameters
| Parameter | Values | Default | Description |
|---|---|---|---|
| role | General, Developer, Researcher, Support, Leader | General | Sets the primary role for dimension weighting |
| companyType | Startup, Enterprise, Aspirational | (none) | Applies company-specific weight modifiers |
| distribution | bellCurve, linear, progressive, sigmoid | bellCurve | Point distribution curve (Advanced Options) |
Example URLs
Developer at a Startup:
https://sagearbor.github.io/ai-skill-eval-kit/level1.html?role=Developer&companyType=Startup
Leader at an Enterprise company:
https://sagearbor.github.io/ai-skill-eval-kit/level1.html?role=Leader&companyType=Enterprise
Researcher with progressive point distribution:
https://sagearbor.github.io/ai-skill-eval-kit/level1.html?role=Researcher&distribution=progressive
Using the Share Settings Button
On the assessment page, after selecting your desired settings:
1. Configure the role, company type, and distribution as needed
2. Click the "Share Settings" button (located above the assessment form)
3. The URL with your current settings is copied to your clipboard
4. Share this link with your team via email, Slack, or any messaging platform
Tip: Only non-default values are included in the URL to keep it clean. If you select "General" role with no company type and "bellCurve" distribution, the URL will be the plain assessment page.
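The "only non-default values" logic can be mirrored in a short sketch. The site itself presumably implements this in its own page code; here the parameter names and defaults are taken from the table above, and everything else is an assumption:

```python
from urllib.parse import urlencode

BASE = "https://sagearbor.github.io/ai-skill-eval-kit/level1.html"
DEFAULTS = {"role": "General", "companyType": None, "distribution": "bellCurve"}

def share_url(role="General", companyType=None, distribution="bellCurve") -> str:
    """Build a pre-configured assessment URL, including only non-default values."""
    settings = {"role": role, "companyType": companyType,
                "distribution": distribution}
    params = {k: v for k, v in settings.items()
              if v is not None and v != DEFAULTS[k]}
    return BASE + ("?" + urlencode(params) if params else "")

print(share_url(role="Developer", companyType="Startup"))
print(share_url())  # all defaults -> plain assessment page URL
```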
Use Cases
- Team-wide assessments: Send a single link to all developers with ?role=Developer&companyType=Startup
- Role-specific campaigns: Create separate links for different job functions
- Standardized scoring: Ensure consistent point distribution across your organization
- Onboarding: Include the assessment link in new hire materials with appropriate defaults