The fastest JSON Schema validator for node.js and browser. Supports v5 proposals.
Please note: You can start using the NEW beta version 5.0.4 (see migration guide from 4.x.x) with support for JSON-Schema draft-06 (not officially published yet): npm install ajv@^5.0.4-beta. Also see docs for 5.0.4.
Ajv generates code using doT templates to turn JSON schemas into super-fast validation functions that are efficient for v8 optimization.
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks:
Performance of different validators by json-schema-benchmark:
Ajv supports the keywords switch, constant, contains, patternGroups, patternRequired, formatMaximum / formatMinimum and formatExclusiveMaximum / formatExclusiveMinimum from JSON-schema v5 proposals with option v5.
Currently Ajv is the only validator that passes all the tests from the JSON Schema Test Suite (according to json-schema-benchmark, apart from the test requiring that 1.0 is not an integer, which is impossible to satisfy in JavaScript).
npm install ajv
To install a stable beta version 5.0.4 (see migration guide from 4.x.x):
npm install ajv@^5.0.4-beta
Try it in the node REPL: https://tonicdev.com/npm/ajv
The fastest validation call:
var Ajv = require('ajv');
var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true}
var validate = ajv.compile(schema);
var valid = validate(data);
if (!valid) console.log(validate.errors);
or with less code
// ...
var valid = ajv.validate(schema, data);
if (!valid) console.log(ajv.errors);
// ...
or
// ...
ajv.addSchema(schema, 'mySchema');
var valid = ajv.validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
See API and Options for more details.
Ajv compiles schemas to functions and caches them in all cases (using the schema stringified with json-stable-stringify as the key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.
The best performance is achieved when using the compiled functions returned by the compile or getSchema methods (there is no additional function call).
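Conceptually, the caching described above can be sketched like this (a simplified illustration, not Ajv's actual implementation; Ajv uses json-stable-stringify rather than JSON.stringify so that key order does not matter):

```javascript
// Simplified sketch of schema caching: the serialized schema is the cache key,
// so two structurally identical schema objects share one compiled function.
function makeCachingCompiler(compileFn) {
  var cache = {};
  return function (schema) {
    var key = JSON.stringify(schema); // Ajv uses json-stable-stringify here
    if (!cache[key]) cache[key] = compileFn(schema);
    return cache[key];
  };
}

var compileCount = 0;
var compile = makeCachingCompiler(function (schema) {
  compileCount++; // counts how many real compilations happened
  return function validate(data) { return typeof data === schema.type; };
});

var v1 = compile({ type: 'number' });
var v2 = compile({ type: 'number' }); // different object, same schema - cache hit
// v1 === v2, and compileCount is still 1
```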
Please note: every time a validation function or ajv.validate is called, the errors property is overwritten. You need to copy the errors array reference to another variable if you want to use it later (e.g., in a callback). See Validation errors.
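For example, a small sketch (a hypothetical helper, not part of Ajv's API) of capturing errors before the next call overwrites them:

```javascript
// Hypothetical helper: copy the errors array right after validating,
// because the next call to the same validating function replaces it.
function validateAndCollect(validate, data) {
  var valid = validate(data);
  // slice() copies the array so a later validation can't change our copy
  return { valid: valid, errors: valid ? null : validate.errors.slice() };
}

// var result = validateAndCollect(validate, data);
// if (!result.valid) console.log(result.errors);
```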
You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.
If you need to use Ajv in several bundles you can create a separate UMD bundle using the npm run bundle script (thanks to siddo420).
Then you need to load Ajv in the browser:
<script src="ajv.min.js"></script>
This bundle can be used with different module systems, or it creates a global Ajv if no module system is found.
The browser bundle is available on cdnjs.
Ajv is tested with these browsers:
Please note: some frameworks, e.g. Dojo, may redefine the global require in such a way that it is not compatible with the CommonJS module format. In that case the Ajv bundle has to be loaded before the framework, and then you can use the global Ajv (see issue #234).
A CLI is available as a separate npm package, ajv-cli.
Ajv supports all validation keywords from draft 4 of JSON-schema standard:
With option v5: true Ajv also supports all validation keywords and the $data reference from v5 proposals for the JSON-schema standard, including patternRequired - like required, but with patterns that some property should match. See JSON-Schema validation keywords for more details.
The following formats are supported for string validation with "format" keyword:
There are two modes of format validation: fast and full. This mode affects the formats date, time, date-time, uri, email, and hostname. The formats date, time and date-time validate ranges in full mode and only a regexp in fast mode. See Options for details.
You can add additional formats and replace any of the formats above using addFormat method.
The option unknownFormats allows changing the behaviour when an unknown format is encountered - Ajv can either ignore it (the default now) or fail schema compilation (which will be the default in 5.0.0).
You can find patterns used for format validation and the sources that were used in formats.js.
With the v5 option you can use values from the validated data as the values for the schema keywords. See the v5 proposal for more information about how it works.
The $data reference is supported in the keywords: constant, enum, format, maximum / minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems.
The value of "$data" should be a JSON-pointer to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a relative JSON-pointer (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema).
Examples.
This schema requires that the value in the property smaller is less than or equal to the value in the property larger:
var schema = {
"properties": {
"smaller": {
"type": "number",
"maximum": { "$data": "1/larger" }
},
"larger": { "type": "number" }
}
};
var validData = {
smaller: 5,
larger: 7
};
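To illustrate how a relative JSON-pointer such as "1/larger" resolves (a minimal sketch for this example only, not Ajv's implementation): the leading integer says how many levels to go up from the current point in the data, and the rest of the pointer walks back down:

```javascript
// Resolve a relative JSON pointer against a stack of ancestor values,
// where stack[stack.length - 1] is the current location in the data.
function resolveRelativePointer(pointer, stack) {
  var parts = pointer.split('/');
  var up = parseInt(parts[0], 10);         // levels to ascend
  var base = stack[stack.length - 1 - up]; // ancestor to start from
  for (var i = 1; i < parts.length; i++) {
    base = base == null ? undefined : base[parts[i]];
  }
  return base;
}

var data = { smaller: 5, larger: 7 };
// While validating data.smaller the stack is [data, data.smaller],
// so "1/larger" ascends to the root object and descends into "larger":
var limit = resolveRelativePointer('1/larger', [data, data.smaller]);
// limit === 7
```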
This schema requires that the properties have the same format as their field names:
var schema = {
"additionalProperties": {
"type": "string",
"format": { "$data": "0#" }
}
};
var validData = {
'date-time': '1963-06-19T08:30:06.283185Z',
email: 'joe.bloggs@example.com'
}
The $data reference is resolved safely - it won't throw even if some property is undefined. If $data resolves to undefined, the validation succeeds (with the exception of the constant keyword). If $data resolves to an incorrect type (e.g. not "number" for the maximum keyword), the validation fails.
With the v5 option and the package ajv-merge-patch you can use the keywords $merge and $patch that allow extending JSON-schemas with patches using the formats JSON Merge Patch (RFC 7396) and JSON Patch (RFC 6902).
To add the keywords $merge and $patch to an Ajv instance use this code:
require('ajv-merge-patch')(ajv);
Examples.
Using $merge:
{
"$merge": {
"source": {
"type": "object",
"properties": { "p": { "type": "string" } },
"additionalProperties": false
},
"with": {
"properties": { "q": { "type": "number" } }
}
}
}
Using $patch:
{
"$patch": {
"source": {
"type": "object",
"properties": { "p": { "type": "string" } },
"additionalProperties": false
},
"with": [
{ "op": "add", "path": "/properties/q", "value": { "type": "number" } }
]
}
}
The schemas above are equivalent to this schema:
{
"type": "object",
"properties": {
"p": { "type": "string" },
"q": { "type": "number" }
},
"additionalProperties": false
}
The properties source and with in the keywords $merge and $patch can use an absolute or relative $ref to point to other schemas previously added to the Ajv instance or to fragments of the current schema.
See the package ajv-merge-patch for more information.
The advantages of using custom keywords include the ability to modify data during validation (the modifying option MUST be used in the keyword definition) and/or create side effects while the data is being validated.
If a keyword is used only for side effects and its validation result is pre-defined, use the option valid: true/false in the keyword definition to simplify both the generated code (no error handling in case of valid: true) and your keyword functions (no need to return any validation result).
The concerns you have to be aware of when extending JSON-schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and to properly document these keywords so that everybody can understand them in your schemas.
You can define custom keywords with the addKeyword method. Keywords are defined on the ajv instance level - new instances will not have previously defined keywords.
Ajv allows defining keywords with compile, validate, macro or inline functions.
Example: range and exclusiveRange keywords using a compiled schema:
ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) {
var min = sch[0];
var max = sch[1];
return parentSchema.exclusiveRange === true
? function (data) { return data > min && data < max; }
: function (data) { return data >= min && data <= max; }
} });
var schema = { "range": [2, 4], "exclusiveRange": true };
var validate = ajv.compile(schema);
console.log(validate(2.01)); // true
console.log(validate(3.99)); // true
console.log(validate(2)); // false
console.log(validate(4)); // false
Several custom keywords (typeof, instanceof, range and propertyNames) are defined in ajv-keywords package - they can be used for your schemas and as a starting point for your own custom keywords.
See Defining custom keywords for more details.
During asynchronous compilation, remote references are loaded using a supplied function. See the compileAsync method and the loadSchema option.
Example:
var ajv = new Ajv({ loadSchema: loadSchema });
ajv.compileAsync(schema, function (err, validate) {
if (err) return;
var valid = validate(data);
});
function loadSchema(uri, callback) {
request.json(uri, function(err, res, body) {
if (err || res.statusCode >= 400)
callback(err || new Error('Loading error: ' + res.statusCode));
else
callback(null, body);
});
}
Please note: the option missingRefs should NOT be set to "ignore" or "fail" for asynchronous compilation to work.
Example in node REPL: https://tonicdev.com/esp/ajv-asynchronous-validation
You can define custom formats and keywords that perform validation asynchronously by accessing a database or some service. You should add async: true in the keyword or format definition (see addFormat, addKeyword and Defining custom keywords).
If your schema uses asynchronous formats/keywords or refers to some schema that contains them, it should have the "$async": true keyword so that Ajv can compile it correctly. If an asynchronous format/keyword or a reference to an asynchronous schema is used in a schema without the $async keyword, Ajv will throw an exception during schema compilation.
Please note: all asynchronous subschemas that are referenced from the current or other schemas should have the "$async": true keyword as well, otherwise the schema compilation will fail.
The validation function for an asynchronous custom format/keyword should return a promise that resolves to true or false (or rejects with new Ajv.ValidationError(errors) if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas either to a generator function (default) that can be optionally transpiled with regenerator, or to an es7 async function that can be transpiled with nodent or with regenerator as well. You can also supply any other transpiler as a function. See Options.
The compiled validation function has a $async: true property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas.
If you are using generators, the compiled validation function can be either wrapped with co (default) or returned as a generator function that can be used directly, e.g. in koa 1.0. co is a small library; it is included in Ajv (both as an npm dependency and in the browser bundle).
Generator functions are currently supported in Chrome, Firefox and node.js (0.11+); if you are using Ajv in other browsers or in older versions of node.js you should use one of the available transpiling options. All provided async modes use the global Promise class. If your platform does not have Promise you should use a polyfill that defines it.
The validation result will be a promise that resolves to true or rejects with an exception Ajv.ValidationError that has the array of validation errors in the errors property.
Example:
/**
* without "async" and "transpile" options (or with option {async: true})
* Ajv will choose the first supported/installed option in this order:
* 1. native generator function wrapped with co
* 2. es7 async functions transpiled with nodent
* 3. es7 async functions transpiled with regenerator
*/
var ajv = new Ajv;
ajv.addKeyword('idExists', {
async: true,
type: 'number',
validate: checkIdExists
});
function checkIdExists(schema, data) {
return knex(schema.table)
.select('id')
.where('id', data)
.then(function (rows) {
return !!rows.length; // true if record is found
});
}
var schema = {
"$async": true,
"properties": {
"userId": {
"type": "integer",
"idExists": { "table": "users" }
},
"postId": {
"type": "integer",
"idExists": { "table": "posts" }
}
}
};
var validate = ajv.compile(schema);
validate({ userId: 1, postId: 19 })
.then(function (valid) {
// "valid" is always true here
console.log('Data is valid');
})
.catch(function (err) {
if (!(err instanceof Ajv.ValidationError)) throw err;
// data is invalid
console.log('Validation errors:', err.errors);
});
To use a transpiler you should separately install it (or load its bundle in the browser).
Ajv npm package includes minified browser bundles of regenerator and nodent in dist folder.
var ajv = new Ajv({ /* async: 'es7', */ transpile: 'nodent' });
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
npm install nodent or use nodent.min.js from the dist folder of the npm package.
var ajv = new Ajv({ /* async: 'es7', */ transpile: 'regenerator' });
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
npm install regenerator or use regenerator.min.js from the dist folder of the npm package.
var ajv = new Ajv({ async: 'es7', transpile: transpileFunc });
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
See Options.
mode | transpile speed* | run-time speed* | bundle size
---|---|---|---
generators (native) | - | 1.0 | -
es7.nodent | 1.35 | 1.1 | 183Kb
es7.regenerator | 1.0 | 2.7 | 322Kb
regenerator | 1.0 | 3.2 | 322Kb
* Relative performance in node v.4, smaller is better.
nodent has several advantages (see the table above); regenerator is a more widely adopted alternative.
With the option removeAdditional (added by andyscott) you can filter data during validation.
This option modifies original data.
Example:
var ajv = new Ajv({ removeAdditional: true });
var schema = {
"additionalProperties": false,
"properties": {
"foo": { "type": "number" },
"bar": {
"additionalProperties": { "type": "number" },
"properties": {
"baz": { "type": "string" }
}
}
}
}
var data = {
"foo": 0,
"additional1": 1, // will be removed; `additionalProperties` == false
"bar": {
"baz": "abc",
"additional2": 2 // will NOT be removed; `additionalProperties` != false
},
}
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } }
If the removeAdditional option in the example above were "all", then both the additional1 and additional2 properties would have been removed.
If the option were "failing", then the property additional1 would have been removed regardless of its value, and the property additional2 would have been removed only if its value were failing the schema in the inner additionalProperties (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed).
Please note: if you use the removeAdditional option with the additionalProperties keyword inside anyOf/oneOf keywords, your validation can fail with this schema, for example:
{
"type": "object",
"oneOf": [
{
"properties": {
"foo": { "type": "string" }
},
"required": [ "foo" ],
"additionalProperties": false
},
{
"properties": {
"bar": { "type": "integer" }
},
"required": [ "bar" ],
"additionalProperties": false
}
]
}
The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties.
With the option removeAdditional: true the validation will pass for the object { "foo": "abc" } but will fail for the object { "bar": 1 }. This happens because while the first subschema in oneOf is validated, the property bar is removed because it is an additional property according to the standard (it is not included in the properties keyword in the same schema).
While this behaviour is unexpected (issues #129, #134), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way:
{
"type": "object",
"properties": {
"foo": { "type": "string" },
"bar": { "type": "integer" }
},
"additionalProperties": false,
"oneOf": [
{ "required": [ "foo" ] },
{ "required": [ "bar" ] }
]
}
The schema above is also more efficient - it will compile into a faster function.
With the option useDefaults Ajv will assign values from the default keyword in the schemas of properties and items (when it is an array of schemas) to the missing properties and items.
This option modifies original data.
Please note: by default the default value is inserted in the generated validation code as a literal (starting from v4.0), so the value inserted in the data will be a deep clone of the default in the schema.
If you need to insert the default value in the data by reference, pass the option useDefaults: "shared".
Inserting defaults by reference can be faster (in case you have an object in default) and it allows having dynamic values in defaults, e.g. a timestamp, without recompiling the schema. The side effect is that modifying the default value in any validated data instance will change the default in the schema and in other validated data instances. See example 3 below.
Example 1 (default in properties):
var ajv = new Ajv({ useDefaults: true });
var schema = {
"type": "object",
"properties": {
"foo": { "type": "number" },
"bar": { "type": "string", "default": "baz" }
},
"required": [ "foo", "bar" ]
};
var data = { "foo": 1 };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": "baz" }
Example 2 (default in items):
var schema = {
"type": "array",
"items": [
{ "type": "number" },
{ "type": "string", "default": "foo" }
]
}
var data = [ 1 ];
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // [ 1, "foo" ]
Example 3 (inserting "defaults" by reference):
var ajv = new Ajv({ useDefaults: 'shared' });
var schema = {
properties: {
foo: {
default: { bar: 1 }
}
}
}
var validate = ajv.compile(schema);
var data = {};
console.log(validate(data)); // true
console.log(data); // { foo: { bar: 1 } }
data.foo.bar = 2;
var data2 = {};
console.log(validate(data2)); // true
console.log(data2); // { foo: { bar: 2 } }
default keywords in other cases are ignored:
- not in properties or items subschemas
- in schemas inside anyOf, oneOf and not (see #42)
- in the if subschema of the v5 switch keyword

When you are validating user inputs, all your data properties are usually strings. The option coerceTypes allows you to have your data types coerced to the types specified in your schema type keywords, both to pass the validation and to use the correctly typed data afterwards.
This option modifies original data.
Please note: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value.
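This is ordinary JavaScript pass-by-value behaviour for scalars, which can be shown without Ajv at all (illustrative only):

```javascript
// Rebinding a scalar parameter inside a function cannot change
// the caller's variable - only objects are shared by reference.
function coerceToNumber(data) {
  data = Number(data); // local rebinding, invisible to the caller
  return data;
}

var input = '1';
var coerced = coerceToNumber(input);
// coerced is the number 1, but input is still the string '1'

var obj = { foo: '1' };
(function (data) { data.foo = Number(data.foo); })(obj);
// obj.foo is now the number 1: properties of a passed object CAN be updated
```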
Example 1:
var ajv = new Ajv({ coerceTypes: true });
var schema = {
"type": "object",
"properties": {
"foo": { "type": "number" },
"bar": { "type": "boolean" }
},
"required": [ "foo", "bar" ]
};
var data = { "foo": "1", "bar": "false" };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": false }
Example 2 (array coercions):
var ajv = new Ajv({ coerceTypes: 'array' });
var schema = {
"properties": {
"foo": { "type": "array", "items": { "type": "number" } },
"bar": { "type": "boolean" }
}
};
var data = { "foo": "1", "bar": ["false"] };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": [1], "bar": false }
The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords).
See Coercion rules for details.
Create an Ajv instance.
All the instance methods below are bound to the instance, so they can be used without the instance.
Generate a validating function and cache the compiled schema for future use.
The validating function returns a boolean and has properties errors with the errors from the last validation (null if there were no errors) and schema with the reference to the original schema.
Unless the option validateSchema is false, the schema will be validated against a meta-schema, and if the schema is invalid an error will be thrown. See options.
Asynchronous version of the compile method that loads missing remote schemas using the asynchronous function in options.loadSchema. The callback will always be called with 2 parameters: error (or null) and validating function. The error will not be null, e.g., if a missing schema cannot be loaded (loadSchema calls the callback with an error).
The function compiles the schema and loads the first missing schema multiple times, until all missing schemas are loaded.
See example in Asynchronous compilation.
Validate data using the passed schema (it will be compiled and cached).
Instead of the schema you can use a key that was previously passed to addSchema, the schema id if it was present in the schema, or any previously resolved reference.
Validation errors will be available in the errors property of the Ajv instance (null if there were no errors).
Please note: every time this method is called the errors are overwritten, so you need to copy them to another variable if you want to use them later.
If the schema is asynchronous (has the $async keyword on the top level) this method returns a Promise. See Asynchronous validation.
Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole.
An array of schemas can be passed (the schemas should have ids); the second parameter will be ignored.
A key can be passed that can be used to reference the schema; it will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key.
Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data.
Although addSchema does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used for the first time.
By default the schema is validated against a meta-schema before it is added, and if the schema does not pass validation an exception is thrown. This behaviour is controlled by the validateSchema option.
Adds meta schema(s) that can be used to validate other schemas. This function should be used instead of addSchema because there may be instance options that would compile a meta schema incorrectly (at the moment it is the removeAdditional option).
There is no need to explicitly add the draft 4 meta schema (http://json-schema.org/draft-04/schema and http://json-schema.org/schema) - it is added by default, unless the option meta is set to false. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See validateSchema.
With the option v5: true a meta-schema that includes v5 keywords is also added.
Validates a schema. This method should be used to validate schemas rather than validate due to the inconsistency of the uri format in the JSON-Schema standard.
By default this method is called automatically when the schema is added, so you rarely need to use it directly.
If the schema doesn't have a $schema property, it is validated against the draft 4 meta-schema (the option meta should not be false) or against the v5 meta-schema if the option v5 is true.
If the schema has a $schema property, then the schema with this id (that should be previously added) is used to validate the passed schema.
Errors will be available at ajv.errors.
Retrieve a compiled schema previously added with addSchema, by the key passed to addSchema or by its full reference (id). The returned validating function has a schema property with the reference to the original schema.
Remove an added/cached schema. Even if the schema is referenced by other schemas it can be safely removed, as dependent schemas have local references.
A schema can be removed using the key or the id passed to addSchema, or the schema itself. If no parameter is passed, all schemas but meta-schemas will be removed and the cache will be cleared.
Add a custom format to validate strings. It can also be used to replace pre-defined formats for the Ajv instance.
Strings are converted to RegExp. A function should return the validation result as true or false.
If an object is passed, it should have the properties validate, compare and async:
- validate: a string, RegExp or function as described above.
- compare: an optional comparison function that is used with the keywords formatMaximum/formatMinimum (from v5 proposals - the v5 option should be used). It should return 1 if the first value is bigger than the second value, -1 if it is smaller and 0 if it is equal.
- async: an optional true value if validate is an asynchronous function; in this case it should return a promise that resolves with a value true or false.

Custom formats can also be added via the formats option.
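A sketch of an object-form format definition (the property names validate and compare are from the description above; the format name "semver" and its regexp are illustrative assumptions, not one of Ajv's pre-defined formats):

```javascript
// Hypothetical "semver" format: validate checks the shape,
// compare orders two version strings for formatMaximum/formatMinimum.
var semverFormat = {
  validate: function (str) {
    return /^\d+\.\d+\.\d+$/.test(str);
  },
  compare: function (a, b) {
    var pa = a.split('.').map(Number);
    var pb = b.split('.').map(Number);
    for (var i = 0; i < 3; i++) {
      if (pa[i] > pb[i]) return 1;  // first value is bigger
      if (pa[i] < pb[i]) return -1; // first value is smaller
    }
    return 0; // equal
  }
};

// ajv.addFormat('semver', semverFormat);
```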
Add custom validation keyword to Ajv instance.
Keyword should be different from all standard JSON schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance.
A keyword must start with a letter, _ or $, and may continue with letters, numbers, _, $, or -.
It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions.
Example keywords:
- "xyz-example": valid, and uses a prefix for the xyz project to avoid name collisions.
- "example": valid, but not recommended as it could collide with future versions of JSON schema etc.
- "3-example": invalid, as numbers are not allowed to be the first character in a keyword

A keyword definition is an object with the following properties:
- schema: an optional false value used with the "validate" keyword to not pass the schema to the validation function
- modifying: true MUST be passed if the keyword modifies data
- valid: pass true/false to pre-define the validation result; the result returned from the validation function will be ignored. This option cannot be used with macro keywords.
- $data: an optional true value to support the $data reference as the value of the custom keyword. The reference will be resolved at validation time. If the keyword has a meta-schema, it will be extended to allow $data and will be used to validate the resolved value. Supporting the $data reference requires that the keyword has a validating function (as the only option or in addition to the compile, macro or inline function).
- async: an optional true value if the validation function is asynchronous (whether it is compiled or passed in the validate property); in this case it should return a promise that resolves with a value true or false. This option is ignored in case of "macro" and "inline" keywords.
. This option is ignored in case of "macro" and "inline" keywords.compile, macro and inline are mutually exclusive, only one should be used at a time. validate can be used separately or in addition to them to support $data reference.
Please note: if the keyword is validating data of a type that is different from the type(s) in its definition, the validation function will not be called (and the expanded macro will not be used), so there is no need to check for the data type inside the validation function or inside the schema returned by the macro function (unless you want to enforce a specific type and for some reason do not want to use a separate type keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed.
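For example, a macro keyword returns a schema that Ajv expands in place of the keyword (the keyword name x-even is a hypothetical illustration, prefixed as recommended above):

```javascript
// Hypothetical macro keyword: { "x-even": true } expands to
// { "multipleOf": 2 }, and { "x-even": false } to its negation,
// so Ajv generates the same code as for the expanded schema.
var evenKeywordDef = {
  type: 'number',
  macro: function (schemaValue) {
    return schemaValue
      ? { multipleOf: 2 }
      : { not: { multipleOf: 2 } };
  }
};

// ajv.addKeyword('x-even', evenKeywordDef);
// ajv.validate({ "x-even": true }, 4); // would pass for even numbers
```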
See Defining custom keywords for more details.
Returns the custom keyword definition, true for pre-defined keywords, and false if the keyword is unknown.
Removes a custom or pre-defined keyword so you can redefine it.
While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - which may lead to unexpected results.
Please note: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use the removeSchema method and compile them again.
Returns the text with all errors in a String.
Options can have properties separator (string used to separate errors, ", " by default) and dataVar (the variable name that dataPaths are prefixed with, "data" by default).
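An illustrative re-implementation of that joining behaviour (not Ajv's source; the "No errors" fallback and the error shape are assumptions based on the description above):

```javascript
// Join errors into one string: "<dataVar><dataPath> <message>" per error,
// separated by options.separator (", " by default).
function errorsText(errors, options) {
  options = options || {};
  var separator = options.separator !== undefined ? options.separator : ', ';
  var dataVar = options.dataVar !== undefined ? options.dataVar : 'data';
  if (!errors || errors.length === 0) return 'No errors';
  return errors.map(function (e) {
    return dataVar + e.dataPath + ' ' + e.message;
  }).join(separator);
}

var msg = errorsText([
  { dataPath: '.foo', message: 'should be number' },
  { dataPath: '.bar', message: 'is required' }
], { separator: '; ' });
// msg === 'data.foo should be number; data.bar is required'
```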
Defaults:
{
// validation and reporting options:
v5: false,
allErrors: false,
verbose: false,
jsonPointers: false,
uniqueItems: true,
unicode: true,
format: 'fast',
formats: {},
unknownFormats: 'ignore',
schemas: {},
// referenced schema options:
missingRefs: true,
extendRefs: true,
loadSchema: undefined, // function(uri, cb) { /* ... */ cb(err, schema); },
// options to modify validated data:
removeAdditional: false,
useDefaults: false,
coerceTypes: false,
// asynchronous validation options:
async: undefined,
transpile: undefined,
// advanced options:
meta: true,
validateSchema: true,
addUsedSchema: true,
inlineRefs: true,
passContext: false,
loopRequired: Infinity,
ownProperties: false,
multipleOfPrecision: false,
errorDataPath: 'object',
sourceCode: true,
messages: true,
beautify: false,
cache: new Cache
}
switch
, constant
, contains
, patternGroups
, patternRequired
, formatMaximum
/ formatMinimum
and formatExclusiveMaximum
/ formatExclusiveMinimum
from JSON-schema v5 proposals. With this option added schemas without `$schema` property are validated against v5 meta-schema. `false` by default.
- _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and validated data in errors (`false` by default).
- _jsonPointers_: set `dataPath` property of errors using JSON Pointers instead of JavaScript property access notation.
- _uniqueItems_: validate `uniqueItems` keyword (`true` by default).
- _unicode_: calculate correct length of strings with unicode pairs (`true` by default). Pass `false` to use `.length` of strings, which is faster but gives "incorrect" lengths for strings with unicode pairs - each unicode pair is counted as two characters.
- _format_: formats validation mode (`'fast'` by default). Pass `'full'` for more correct but slower validation, or `false` not to validate formats at all. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but valid in 'fast' mode.
- _formats_: an object with custom formats. Keys and values will be passed to the `addFormat` method.
- _unknownFormats_: handling of unknown formats. Option values:
  - `true` (will be default in 5.0.0) - if an unknown format is encountered, an exception is thrown during schema compilation. If the `format` keyword value is a v5 `$data` reference and it is unknown, the validation will fail.
  - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if some other unknown format is used. If the `format` keyword value is a v5 `$data` reference and it is not in this array, the validation will fail.
  - `"ignore"` (default now) - log a warning during schema compilation and always pass validation. This option is not recommended, as it allows format names to be mistyped. This behaviour is required by the JSON-schema specification.
- _schemas_: an array or object of schemas that will be added to the instance. When an object is passed, `addSchema(value, key)` will be called for each schema in this object.
- _missingRefs_: handling of missing referenced schemas. Option values:
  - `true` (default) - if a reference cannot be resolved during compilation, an exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted).
  - `"ignore"` - log an error during compilation and always pass validation.
  - `"fail"` - log an error and successfully compile the schema, but fail validation if this rule is checked.
- _extendRefs_: validation of other keywords when `$ref` is present in the schema. Option values:
  - `true` (default) - validate all keywords in the schemas with `$ref`.
  - `"ignore"` - when `$ref` is used, other keywords are ignored (as per the JSON Reference standard). A warning will be logged during schema compilation.
  - `"fail"` - if other validation keywords are used together with `$ref`, an exception will be thrown when the schema is compiled.
- _loadSchema_: asynchronous function that will be used to load remote schemas when `compileAsync` is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept 2 parameters: the remote schema uri and a node-style callback. See example in Asynchronous compilation.
- _removeAdditional_: remove additional properties. This option is not used if the schema is added with the `addMetaSchema` method. Option values:
  - `false` (default) - do not remove additional properties.
  - `"all"` - all additional properties are removed, regardless of the `additionalProperties` keyword in the schema (and no validation is made for them).
  - `true` - only additional properties with `additionalProperties` keyword equal to `false` are removed.
  - `"failing"` - additional properties that fail schema validation will be removed (where the `additionalProperties` keyword is `false` or a schema).
- _useDefaults_: replace missing properties and items with the values from the corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if the schema is added with the `addMetaSchema` method. See examples in Assigning defaults. Option values:
  - `false` (default) - do not use defaults.
  - `true` - insert defaults by value (safer and slower, an object literal is used).
  - `"shared"` - insert defaults by reference (faster). If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well.
- _coerceTypes_: change the type of data to match the `type` keyword. See the example in Coercing data types and coercion rules. Option values:
  - `false` (default) - no type coercion.
  - `true` - coerce scalar data types.
  - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema).
- _async_: determines how Ajv compiles asynchronous schemas. Option values:
  - `"*"` / `"co*"` - compile to a generator function ("co*" - wrapped with `co.wrap`). If generators are not supported and you don't provide the `transpile` option, an exception will be thrown when the Ajv instance is created.
  - `"es7"` - compile to an es7 async function. Unless your platform supports them you need to provide the `transpile` option. Currently only MS Edge 13 with a flag supports es7 async functions (according to the compatibility table).
  - `true` - if the transpile option is not passed, Ajv will choose the first supported/installed async/transpile mode in this order: "co*" (native generator with co.wrap), "es7"/"nodent", "co*"/"regenerator" during the creation of the Ajv instance. If none of the options is available, an exception will be thrown.
  - `undefined` - Ajv will choose the first available async mode in the same way as with the `true` option, but when the first asynchronous schema is compiled.
- _transpile_: determines how Ajv transpiles compiled asynchronous validation functions. Option values:
  - `"nodent"` - transpile with nodent. If nodent is not installed, an exception will be thrown. nodent can only transpile es7 async functions; it will enforce this mode.
  - `"regenerator"` - transpile with regenerator. If regenerator is not installed, an exception will be thrown.
- _meta_: add the meta-schema so it can be used by other schemas (`true` by default). With option `v5: true` the v5 meta-schema will be added as well. If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have a `$schema` keyword.
- _validateSchema_: validate added/compiled schemas against the meta-schema. The `$schema` property in the schema can either be http://json-schema.org/schema or http://json-schema.org/draft-04/schema or absent (the draft-4 meta-schema will be used), or it can be a reference to a schema previously added with the `addMetaSchema` method. Option values:
  - `true` (default) - if the validation fails, throw an exception.
  - `"log"` - if the validation fails, log an error.
  - `false` - skip schema validation.
- _addUsedSchema_: by default the methods `compile` and `validate` add schemas to the instance if they have an `id` property that doesn't start with "#". If an `id` is present and it is not unique, an exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `id` uniqueness check when these methods are used. This option does not affect the `addSchema` method.
- _inlineRefs_: compilation of referenced schemas. Option values:
  - `true` (default) - referenced schemas that don't have refs in them are inlined, regardless of their size - this substantially improves performance at the cost of a bigger size of compiled schema functions.
  - `false` - do not inline referenced schemas (they will be compiled as separate functions).
- _passContext_: pass the validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is the Ajv instance.
- _loopRequired_: by default the `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass an integer to set the number of properties above which the `required` keyword will be validated in a loop - a smaller validation function size but also worse performance.
- _ownProperties_: when this option is `true`, only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst.
- _multipleOfPrecision_: by default the `multipleOf` keyword is validated by comparing the result of the division with parseInt() of that result. This works for divisors that are bigger than 1. For small divisors such as 0.01 the result of the division is usually not an integer (even when it should be an integer, see issue #84). If you need to use fractional divisors, set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower, but allows for floating-point arithmetic deviations).
- _errorDataPath_: set `dataPath` to point to 'object' (default) or to 'property' when validating the keywords `required`, `additionalProperties` and `dependencies`.
- _sourceCode_: add a `sourceCode` property to the validating function (for debugging; this code can be different from the result of a toString call).
- _messages_: include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with ajv-i18n).
- _beautify_: format the generated function with js-beautify. `npm install js-beautify` to use this option. `true` or js-beautify options can be passed.
- _cache_: an optional instance of a cache to store compiled schemas. The cache interface is: `put(key, value)`, `get(key)`, `del(key)` and `clear()`.

## Validation errors

In case of validation failure Ajv assigns the array of errors to the `.errors`
property of the validation function (or to the `.errors` property of the Ajv instance in case the `validate` or `validateSchema` methods were called). In case of asynchronous validation the returned promise is rejected with an exception of the class `Ajv.ValidationError` that has the `.errors` property.
Each error is an object with the following properties:
- _keyword_: the validation keyword.
- _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see Options) `dataPath` will be set using the JSON Pointer standard (e.g., `"/prop/1/subProp"`).
- _schemaPath_: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation.
- _params_: the object with additional information about the error that can be used to create custom error messages (e.g., using the ajv-i18n package). See below for parameters set by all keywords.
- _message_: the standard error message (can be excluded with option `messages` set to false).
- _schema_: the schema of the keyword (added with the `verbose` option).
- _parentSchema_: the schema containing the keyword (added with the `verbose` option).
- _data_: the data validated by the keyword (added with the `verbose` option).

### Error parameters

Properties of the `params` object in errors depend on the keyword that failed validation.
- `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword).
- `additionalItems` - property `limit` (the maximum number of allowed items in case when the `items` keyword is an array of schemas and `additionalItems` is false).
- `additionalProperties` - property `additionalProperty` (the property not used in the `properties` and `patternProperties` keywords).
- `patternGroups` (with v5 option) - properties:
  - `pattern`
  - `reason` ("minimum"/"maximum")
  - `limit` (max/min allowed number of matching properties)
- `dependencies` - properties:
  - `property` (dependent property)
  - `missingProperty` (required missing dependency - only the first one is reported currently)
  - `deps` (required dependencies, comma-separated list as a string)
  - `depsCount` (the number of required dependencies)
- `format` - property `format` (the schema of the keyword).
- `maximum`, `minimum` - properties:
  - `limit` (number, the schema of the keyword)
  - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`)
  - `comparison` (string, the comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">=")
- `multipleOf` - property `multipleOf` (the schema of the keyword).
- `pattern` - property `pattern` (the schema of the keyword).
- `required` - property `missingProperty` (required property that is missing).
- `patternRequired` (with v5 option) - property `missingPattern` (required pattern that did not match any property).
- `type` - property `type` (required type(s), a string, can be a comma-separated list).
- `uniqueItems` - properties `i` and `j` (indices of duplicate items).
- `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword).
- `$ref` - property `ref` with the referenced schema URI.
- custom keywords (in case the keyword definition doesn't create errors) - property `keyword` (the keyword name).

## Tests

```
npm install
git submodule update --init
npm test
```
## Contributing

All validation functions are generated using doT templates in the dot folder. Templates are precompiled, so doT is not a run-time dependency.

- `npm run build` - compiles templates to the dotjs folder.
- `npm run watch` - automatically compiles templates when files in the dot folder change.

Please see the Contributing guidelines.
## Changes history

See https://github.com/epoberezkin/ajv/releases

Please note: Changes in version 5.0.1-beta.