POST, PUT, DELETE Limits
The kintone REST API has limits on the number of records that can be operated on at one time for GET, POST, PUT, and DELETE.
As of March 2024, they are as follows:
| Method | Limit |
| --- | --- |
| GET | 500 records per request |
| POST | 100 records per request |
| PUT | 100 records per request |
| DELETE | 100 records per request |
| Bulk processing (bulkRequest) | 20 requests per call |
Writing out kintone.api(...) calls and their error handling every time you use the REST API is tedious, so I suspect quite a few people define generic helper functions or classes. The problem that tends to come up there is hitting these limits.
For GET (retrieving records), excellent code has been posted on the cybozu developer network, but there was nothing equivalent for POST, PUT, and DELETE.
Reference: Retrieve All Records
This article introduces a function that lets you create records in bulk without worrying about the record limits above.
Only the POST pattern is shown, but it can also be adapted to PUT and DELETE by changing the API method.
Source Code
Repeating Single Requests Concisely
Here is the basic code. If the number of records exceeds the limit, the API is called repeatedly in chunks of the limit from the beginning, and the remaining records are sent in the final call.
/** REST API endpoint */
const END_POINT = '/k/v1/';
/** Record limit for a single POST */
const LIMIT_POST = 100;
const postAllRecords = async (app, _records) => {
  const records = [..._records];
  // Execute in chunks of the limit until records are exhausted
  while (records.length) {
    await kintone.api(kintone.api.url(`${END_POINT}records`, true), 'POST', {
      app: app,
      records: records.slice(0, LIMIT_POST),
    });
    records.splice(0, LIMIT_POST);
  }
};
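A quick usage sketch (assumed to run inside an async kintone event handler; the app ID 1 and the field code title are placeholders, not values from the article):

// 250 dummy records; postAllRecords sends them as 100 + 100 + 50
const newRecords = Array.from({ length: 250 }, (_, i) => ({
  title: { value: `Record ${i + 1}` },
}));
await postAllRecords(1, newRecords);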
When Using bulkRequest
With the pattern above that uses only the POST API, a failure is rolled back only within the chunk of 100 records being processed at that moment.
If you want to maximize the number of records that can be rolled back, use the following code, which takes advantage of bulkRequest.
/** REST API endpoint */
const END_POINT = '/k/v1/';
/** Record limit for a single POST */
const LIMIT_POST = 100;
/** Request limit for simultaneous processing */
const LIMIT_BULK_REQUEST = 20;
const bulkRequest = async (_requests) => {
  const requests = [..._requests];
  const responses = [];
  // Send the requests in chunks of the bulkRequest limit until none remain
  while (requests.length) {
    responses.push(
      await kintone.api(kintone.api.url(`${END_POINT}bulkRequest`, true), 'POST', {
        requests: requests.slice(0, LIMIT_BULK_REQUEST),
      })
    );
    requests.splice(0, LIMIT_BULK_REQUEST);
  }
  return responses;
};
const postAllRecords = async (app, _records) => {
  const records = [..._records];
  const payloads = [];
  // Split the records into POST payloads of up to LIMIT_POST records each
  while (records.length) {
    payloads.push({
      app,
      records: records.slice(0, LIMIT_POST),
    });
    records.splice(0, LIMIT_POST);
  }
  const requestBase = {
    method: 'POST',
    api: `${END_POINT}records.json`,
  };
  // Wrap each payload in a bulkRequest sub-request and delegate to bulkRequest
  const requests = payloads.map((payload) => ({
    ...requestBase,
    payload,
  }));
  return bulkRequest(requests);
};
The response of bulkRequest is an object, and the responses of the individual API calls are stored in its results array.
If you want callers not to have to care that bulkRequest is used internally, flatten the return value before returning it:
return responses.map(({ results }) => results).flat();
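With that change, the flattened return value is an array with one entry per POST chunk. A rough sketch, assuming the standard response shape of the records POST API:

// e.g. for 250 records posted in chunks of 100, 100 and 50:
// [
//   { ids: [...100 ids], revisions: [...] },
//   { ids: [...100 ids], revisions: [...] },
//   { ids: [...50 ids], revisions: [...] },
// ]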
For PUT and DELETE
Although the sample shows the POST pattern, it can also be adapted to PUT and DELETE by changing the method.
One thing to note is that the shape of _records, the argument in the sample code, differs between POST, PUT, and DELETE.
For POST it was an array of kintone record data, but for PUT it becomes an array of objects containing the record number and the kintone record, as shown below.
For DELETE it becomes an array of record numbers.
// PUT
_records = [
  { id: 1, record: kintoneRecord1 },
  { id: 2, record: kintoneRecord2 },
  //...
];

// DELETE
_records = [
  1,
  2,
  3,
  4,
  5, //...
];
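As one concrete adaptation, here is a minimal sketch of a DELETE variant built on the same bulkRequest helper. Note that the DELETE records API takes an ids parameter rather than records; the constant LIMIT_DELETE and the function name are illustrative, not from the original article.

/** Record limit for a single DELETE */
const LIMIT_DELETE = 100;

const deleteAllRecords = async (app, _ids) => {
  const ids = [..._ids];
  const requests = [];
  // Split the record numbers into DELETE sub-requests of up to LIMIT_DELETE each
  while (ids.length) {
    requests.push({
      method: 'DELETE',
      api: `${END_POINT}records.json`,
      payload: { app, ids: ids.slice(0, LIMIT_DELETE) },
    });
    ids.splice(0, LIMIT_DELETE);
  }
  return bulkRequest(requests);
};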
Things to Note
Even if the process fails partway through, records already committed by earlier requests are not rolled back.
For example, suppose you run a bulk registration of 5,000 records and an error occurs at the 3,000th record.
With the method that uses bulkRequest, a single call covers up to

POST record limit (100 records) × bulkRequest limit (20 requests) = 2,000 records

so a failure within the first 2,000 records is rolled back in full. If the error occurs after the 2,000th record, however, the first 2,000 records that were already registered will not be deleted.
This cannot be solved with the current API, so for safer execution you should check the data for consistency before sending anything.
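As a small defensive sketch, you can at least surface the failure and make the partial commit visible (checkConsistency is a hypothetical pre-validation step you would implement for your own app; newRecords is the array of records to create):

try {
  // Hypothetical pre-validation: required fields, uniqueness, references, etc.
  // await checkConsistency(newRecords);
  await postAllRecords(1, newRecords);
} catch (error) {
  // Chunks committed by earlier bulkRequest calls are NOT rolled back here,
  // so log enough information to clean up manually if needed
  console.error('postAllRecords failed partway through:', error);
  throw error;
}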