fix: resolve TypeScript errors in frontend build

This commit is contained in:
Hiro
2026-03-30 23:16:07 +00:00
parent b733306773
commit 24925e1acb
2941 changed files with 418042 additions and 49 deletions

node_modules/prosemirror-changeset/CHANGELOG.md generated vendored Normal file

@@ -0,0 +1,147 @@
## 2.4.0 (2026-02-14)
### New features
`Change` objects can now be serialized to and deserialized from JSON, and `ChangeSet.create` allows you to pass in a set of changes.
## 2.3.1 (2025-05-28)
### Bug fixes
Improve diffing to not treat closing tokens of different node types as the same token.
## 2.3.0 (2025-05-05)
### New features
Change sets can now be passed a custom token encoder that controls the way changed content is diffed.
## 2.2.1 (2023-05-17)
### Bug fixes
Include CommonJS type declarations in the package to please new TypeScript resolution settings.
## 2.2.0 (2022-05-30)
### New features
Include TypeScript type declarations.
## 2.1.2 (2019-11-20)
### Bug fixes
Rename ES module files to use a .js extension, since Webpack gets confused by .mjs
## 2.1.1 (2019-11-19)
### Bug fixes
The file referred to in the package's `module` field now is compiled down to ES5.
## 2.1.0 (2019-11-08)
### New features
Add a `module` field to package json file.
## 2.0.4 (2019-03-12)
### Bug fixes
Fixes an issue where steps that cause multiple changed ranges (such as `ReplaceAroundStep`) would cause invalid change sets.
Fix a bug in incremental change set updates that would cause incorrect results in a number of cases.
## 2.0.3 (2019-01-09)
### Bug fixes
Make `simplifyChanges` merge adjacent simplified changes (which can occur when a word boundary is inserted).
## 2.0.2 (2019-01-08)
### Bug fixes
Fix a bug in simplifyChanges that only occurred when the changes weren't at the start of the document.
## 2.0.1 (2019-01-07)
### Bug fixes
Fixes issue in `simplifyChanges` where it sometimes returned nonsense when the inspected text wasn't at the start of the document.
## 2.0.0 (2019-01-04)
### Bug fixes
Solves various cases where complicated edits could corrupt the set of changes due to the mapped positions of deletions not agreeing with the mapped positions of insertions.
### New features
Moves to a more efficient diffing algorithm (Myers), so that large replacements can be accurately diffed using reasonable time and memory.
You can now find the original document given to a `ChangeSet` with its `startDoc` accessor.
### Breaking changes
The way change data is stored in `ChangeSet` objects works differently in this version. Instead of keeping deletions and insertions in separate arrays, the object holds an array of changes, which cover all the changed regions between the old and new document. Each change has start and end positions in both the old and the new document, and contains arrays of insertions and deletions within it.
This representation avoids a bunch of consistency problems that existed in the old approach, where keeping positions coherent in the deletion and insertion arrays was hard.
This means the `deletions` and `insertions` members in `ChangeSet` are gone, and instead there is a `changes` property which holds an array of `Change` objects. Each of these has `fromA` and `toA` properties indicating its extent in the old document, and `fromB` and `toB` properties pointing into the new document. Actual insertions and deletions are stored in `inserted` and `deleted` arrays in `Change` objects, which hold `{data, length}` objects, where the total length of deletions adds up to `toA - fromA`, and the total length of insertions to `toB - fromB`.
When creating a `ChangeSet` object, you no longer need to pass separate compare and combine callbacks. Instead, these are now represented using a single function that returns a combined data value or `null` when the values are not compatible.
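The span-length invariants described above can be illustrated with plain objects (a hypothetical sketch using object literals, not the real `Change`/`Span` classes):

```javascript
// A change replacing 3 characters of the old document with 4 characters in
// the new one. The deleted spans must cover toA - fromA, the inserted spans
// toB - fromB; the data values here ("user-1") are arbitrary metadata.
const change = {
  fromA: 2, toA: 5,   // range removed from the old document
  fromB: 2, toB: 6,   // range inserted in the new document
  deleted:  [{length: 3, data: "user-1"}],
  inserted: [{length: 4, data: "user-1"}],
};

const total = spans => spans.reduce((sum, span) => sum + span.length, 0);

console.log(total(change.deleted)  === change.toA - change.fromA); // true
console.log(total(change.inserted) === change.toB - change.fromB); // true
```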
## 1.2.1 (2018-11-15)
### Bug fixes
Properly apply the heuristics for ignoring short matches when diffing, and adjust those heuristics to more aggressively weed out tiny matches in large changes.
## 1.2.0 (2018-11-08)
### New features
The new `changedRange` method can be used to compare two change sets and find out which range has changed.
## 1.1.0 (2018-11-07)
### New features
Add a new method, `ChangeSet.map` to update the data associated with changed ranges.
## 1.0.5 (2018-09-25)
### Bug fixes
Fix another issue where overlapping changes that can't be merged could produce a corrupt change set.
## 1.0.4 (2018-09-24)
### Bug fixes
Fixes an issue where `addSteps` could produce invalid change sets when a new step's deleted range overlapped with an incompatible previous deletion.
## 1.0.3 (2017-11-10)
### Bug fixes
Fix issue where deleting, inserting, and deleting the same content would lead to an inconsistent change set.
## 1.0.2 (2017-10-19)
### Bug fixes
Fix a bug that caused `addSteps` to break when merging two insertions into a single deletion.
## 1.0.1 (2017-10-18)
### Bug fixes
Fix crash in `ChangeSet.addSteps`.
## 1.0.0 (2017-10-13)
First stable release.

node_modules/prosemirror-changeset/LICENSE generated vendored Normal file

@@ -0,0 +1,19 @@
Copyright (C) 2017 by Marijn Haverbeke <marijn@haverbeke.berlin> and others
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

node_modules/prosemirror-changeset/README.md generated vendored Normal file

@@ -0,0 +1,175 @@
# prosemirror-changeset
This is a helper module that can turn a sequence of document changes
into a set of insertions and deletions, for example to display them in
a change-tracking interface. Such a set can be built up incrementally,
in order to do such change tracking in a halfway performant way during
live editing.
This code is licensed under an [MIT
licence](https://github.com/ProseMirror/prosemirror-changeset/blob/master/LICENSE).
## Programming interface
Insertions and deletions are represented as spans—ranges in the
document. The deleted spans refer to the original document, whereas
the inserted ones point into the current document.
It is possible to associate arbitrary data values with such spans, for
example to track the user that made the change, the timestamp at which
it was made, or the step data necessary to invert it again.
### class Change`<Data = any>`
A replaced range with metadata associated with it.
* **`fromA`**`: number`\
The start of the range deleted/replaced in the old document.
* **`toA`**`: number`\
The end of the range in the old document.
* **`fromB`**`: number`\
The start of the range inserted in the new document.
* **`toB`**`: number`\
The end of the range in the new document.
* **`deleted`**`: readonly Span[]`\
Data associated with the deleted content. The length of these
spans adds up to `this.toA - this.fromA`.
* **`inserted`**`: readonly Span[]`\
Data associated with the inserted content. Length adds up to
`this.toB - this.fromB`.
* **`toJSON`**`() → ChangeJSON`\
Returns a JSON-serializable object to represent this change.
* `static `**`merge`**`<Data>(x: readonly Change[], y: readonly Change[], combine: fn(dataA: Data, dataB: Data) → Data) → readonly Change[]`\
This merges two changesets (the end document of x should be the
start document of y) into a single one spanning the start of x to
the end of y.
* `static `**`fromJSON`**`<Data>(json: ChangeJSON) → Change`\
Deserialize a change from JSON format.
### class Span`<Data = any>`
Stores metadata for a part of a change.
* **`length`**`: number`\
The length of this span.
* **`data`**`: Data`\
The data associated with this span.
### class ChangeSet`<Data = any>`
A change set tracks the changes to a document from a given point
in the past. It condenses a number of step maps down to a flat
sequence of replacements, and simplifies replacements that
partially undo themselves by comparing their content.
* **`changes`**`: readonly Change[]`\
Replaced regions.
* **`addSteps`**`(newDoc: Node, maps: readonly StepMap[], data: Data | readonly Data[]) → ChangeSet`\
Computes a new changeset by adding the given step maps and
metadata (either as an array, per-map, or as a single value to be
associated with all maps) to the current set. Will not mutate the
old set.
Note that due to simplification that happens after each add,
incrementally adding steps might create a different final set
than adding all those changes at once, since different document
tokens might be matched during simplification depending on the
boundaries of the current changed ranges.
* **`startDoc`**`: Node`\
The starting document of the change set.
* **`map`**`(f: fn(range: Span) → Data) → ChangeSet`\
Map the span's data values in the given set through a function
and construct a new set with the resulting data.
* **`changedRange`**`(b: ChangeSet, maps?: readonly StepMap[]) → {from: number, to: number}`\
Compare two changesets and return the range in which they are
changed, if any. If the document changed between the maps, pass
the maps for the steps that changed it as second argument, and
make sure the method is called on the old set and passed the new
set. The returned positions will be in new document coordinates.
* `static `**`create`**`<Data = any>(doc: Node, combine?: fn(dataA: Data, dataB: Data) → Data = (a, b) => a === b ? a : null as any, tokenEncoder?: TokenEncoder = DefaultEncoder, changes?: readonly Change[] = []) → ChangeSet`\
Create a changeset with the given base object and configuration.
The `combine` function is used to compare and combine metadata—it
should return null when metadata isn't compatible, and a combined
version for a merged range when it is.
When given, a token encoder determines how document tokens are
serialized and compared when diffing the content produced by
changes. The default is to just compare nodes by name and text
by character, ignoring marks and attributes.
To serialize a change set, you can store its document and
change array as JSON, and then pass the deserialized (via
[`Change.fromJSON`](#changes.Change^fromJSON)) set of changes
as fourth argument to `create` to recreate the set.
* **`simplifyChanges`**`(changes: readonly Change[], doc: Node) → Change[]`\
Simplifies a set of changes for presentation. This makes the
assumption that having both insertions and deletions within a word
is confusing, and, when such changes occur without a word boundary
between them, they should be expanded to cover the entire set of
words (in the new document) they touch. An exception is made for
single-character replacements.
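The word-expansion behavior described above can be sketched on a plain string (a simplified, hypothetical model; the real `simplifyChanges` operates on ProseMirror documents and whole change arrays):

```javascript
// Widen a replaced range [from, to) so it covers every word it touches,
// mirroring the idea that mixed insertions/deletions inside a word are
// easier to read when shown as a whole-word replacement.
const isWordChar = ch => /[\p{Alphabetic}_0-9]/u.test(ch);

function expandToWord(text, from, to) {
  while (from > 0 && isWordChar(text[from - 1]) && isWordChar(text[from])) from--;
  while (to < text.length && isWordChar(text[to]) && isWordChar(text[to - 1])) to++;
  return {from, to};
}

// A change inside "changeset" (positions 5..8) grows to cover the full word.
console.log(expandToWord("a changeset here", 5, 8)); // → { from: 2, to: 11 }
```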
### interface TokenEncoder`<T>`
A token encoder can be passed when creating a `ChangeSet` in order
to influence the way the library runs its diffing algorithm. The
encoder determines how document tokens (such as nodes and
characters) are encoded and compared.
Note that both the encoding and the comparison may run a lot, and
doing non-trivial work in these functions could impact
performance.
* **`encodeCharacter`**`(char: number, marks: readonly Mark[]) → T`\
Encode a given character, with the given marks applied.
* **`encodeNodeStart`**`(node: Node) → T`\
Encode the start of a node or, if this is a leaf node, the
entire node.
* **`encodeNodeEnd`**`(node: Node) → T`\
Encode the end token for the given node. It is valid to encode
every end token in the same way.
* **`compareTokens`**`(a: T, b: T) → boolean`\
Compare the given tokens. Should return true when they count as
equal.
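As an illustration, here is a hypothetical mark-aware encoder: unlike the default (which ignores marks), it folds mark names into each character token, so re-styling text registers as a change. It assumes the `Mark` objects passed in expose `type.name`, as in prosemirror-model:

```javascript
// Encode each character together with its sorted mark names, so "bold a"
// and "plain a" produce different tokens and therefore diff as changed.
const markAwareEncoder = {
  encodeCharacter: (char, marks) =>
    String.fromCharCode(char) + ":" + marks.map(m => m.type.name).sort().join(","),
  encodeNodeStart: node => node.type.name,
  encodeNodeEnd: node => "/" + node.type.name, // every end token may share a shape
  compareTokens: (a, b) => a === b,
};

// "a" with an em mark vs. "a" with no marks yield distinct tokens.
const em = {type: {name: "em"}};
console.log(markAwareEncoder.encodeCharacter(97, [em]));  // "a:em"
console.log(markAwareEncoder.encodeCharacter(97, []));    // "a:"
```

Such an encoder would be passed as the `tokenEncoder` argument to `ChangeSet.create`; keep these functions cheap, since they run for every token compared.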
### type ChangeJSON`<Data>`
JSON-serialized form of a change.
* **`fromA`**`: number`
* **`toA`**`: number`
* **`fromB`**`: number`
* **`toB`**`: number`
* **`deleted`**`: readonly {length: number, data: Data}[]`
* **`inserted`**`: readonly {length: number, data: Data}[]`

node_modules/prosemirror-changeset/dist/index.cjs generated vendored Normal file

@@ -0,0 +1,585 @@
'use strict';
function _toConsumableArray(arr) { return _arrayWithoutHoles(arr) || _iterableToArray(arr) || _unsupportedIterableToArray(arr) || _nonIterableSpread(); }
function _nonIterableSpread() { throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."); }
function _unsupportedIterableToArray(o, minLen) { if (!o) return; if (typeof o === "string") return _arrayLikeToArray(o, minLen); var n = Object.prototype.toString.call(o).slice(8, -1); if (n === "Object" && o.constructor) n = o.constructor.name; if (n === "Map" || n === "Set") return Array.from(o); if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)) return _arrayLikeToArray(o, minLen); }
function _iterableToArray(iter) { if (typeof Symbol !== "undefined" && iter[Symbol.iterator] != null || iter["@@iterator"] != null) return Array.from(iter); }
function _arrayWithoutHoles(arr) { if (Array.isArray(arr)) return _arrayLikeToArray(arr); }
function _arrayLikeToArray(arr, len) { if (len == null || len > arr.length) len = arr.length; for (var i = 0, arr2 = new Array(len); i < len; i++) arr2[i] = arr[i]; return arr2; }
function _typeof(o) { "@babel/helpers - typeof"; return _typeof = "function" == typeof Symbol && "symbol" == typeof Symbol.iterator ? function (o) { return typeof o; } : function (o) { return o && "function" == typeof Symbol && o.constructor === Symbol && o !== Symbol.prototype ? "symbol" : typeof o; }, _typeof(o); }
function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }
function _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, _toPropertyKey(descriptor.key), descriptor); } }
function _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); Object.defineProperty(Constructor, "prototype", { writable: false }); return Constructor; }
function _toPropertyKey(arg) { var key = _toPrimitive(arg, "string"); return _typeof(key) === "symbol" ? key : String(key); }
function _toPrimitive(input, hint) { if (_typeof(input) !== "object" || input === null) return input; var prim = input[Symbol.toPrimitive]; if (prim !== undefined) { var res = prim.call(input, hint || "default"); if (_typeof(res) !== "object") return res; throw new TypeError("@@toPrimitive must return a primitive value."); } return (hint === "string" ? String : Number)(input); }
function typeID(type) {
var cache = type.schema.cached.changeSetIDs || (type.schema.cached.changeSetIDs = Object.create(null));
var id = cache[type.name];
if (id == null) cache[type.name] = id = Object.keys(type.schema.nodes).indexOf(type.name) + 1;
return id;
}
var DefaultEncoder = {
encodeCharacter: function encodeCharacter(_char) {
return _char;
},
encodeNodeStart: function encodeNodeStart(node) {
return node.type.name;
},
encodeNodeEnd: function encodeNodeEnd(node) {
return -typeID(node.type);
},
compareTokens: function compareTokens(a, b) {
return a === b;
}
};
function tokens(frag, encoder, start, end, target) {
for (var i = 0, off = 0; i < frag.childCount; i++) {
var child = frag.child(i),
endOff = off + child.nodeSize;
var from = Math.max(off, start),
to = Math.min(endOff, end);
if (from < to) {
if (child.isText) {
for (var j = from; j < to; j++) target.push(encoder.encodeCharacter(child.text.charCodeAt(j - off), child.marks));
} else if (child.isLeaf) {
target.push(encoder.encodeNodeStart(child));
} else {
if (from == off) target.push(encoder.encodeNodeStart(child));
tokens(child.content, encoder, Math.max(off + 1, from) - off - 1, Math.min(endOff - 1, to) - off - 1, target);
if (to == endOff) target.push(encoder.encodeNodeEnd(child));
}
}
off = endOff;
}
return target;
}
var MAX_DIFF_SIZE = 5000;
function minUnchanged(sizeA, sizeB) {
return Math.min(15, Math.max(2, Math.floor(Math.max(sizeA, sizeB) / 10)));
}
function computeDiff(fragA, fragB, range) {
var encoder = arguments.length > 3 && arguments[3] !== undefined ? arguments[3] : DefaultEncoder;
var tokA = tokens(fragA, encoder, range.fromA, range.toA, []);
var tokB = tokens(fragB, encoder, range.fromB, range.toB, []);
var start = 0,
endA = tokA.length,
endB = tokB.length;
var cmp = encoder.compareTokens;
while (start < tokA.length && start < tokB.length && cmp(tokA[start], tokB[start])) start++;
if (start == tokA.length && start == tokB.length) return [];
while (endA > start && endB > start && cmp(tokA[endA - 1], tokB[endB - 1])) endA--, endB--;
if (endA == start || endB == start || endA == endB && endA == start + 1) return [range.slice(start, endA, start, endB)];
var lenA = endA - start,
lenB = endB - start;
var max = Math.min(MAX_DIFF_SIZE, lenA + lenB),
off = max + 1;
var history = [];
var frontier = [];
for (var len = off * 2, i = 0; i < len; i++) frontier[i] = -1;
for (var size = 0; size <= max; size++) {
var _loop = function _loop(_diag) {
var next = frontier[_diag + 1 + max],
prev = frontier[_diag - 1 + max];
var x = next < prev ? prev : next + 1,
y = x + _diag;
while (x < lenA && y < lenB && cmp(tokA[start + x], tokB[start + y])) x++, y++;
frontier[_diag + max] = x;
if (x >= lenA && y >= lenB) {
var diff = [],
minSpan = minUnchanged(endA - start, endB - start);
var fromA = -1,
toA = -1,
fromB = -1,
toB = -1;
var add = function add(fA, tA, fB, tB) {
if (fromA > -1 && fromA < tA + minSpan) {
fromA = fA;
fromB = fB;
} else {
if (fromA > -1) diff.push(range.slice(fromA, toA, fromB, toB));
fromA = fA;
toA = tA;
fromB = fB;
toB = tB;
}
};
for (var _i = size - 1; _i >= 0; _i--) {
var _next = frontier[_diag + 1 + max],
_prev = frontier[_diag - 1 + max];
if (_next < _prev) {
_diag--;
x = _prev + start;
y = x + _diag;
add(x, x, y, y + 1);
} else {
_diag++;
x = _next + start;
y = x + _diag;
add(x, x + 1, y, y);
}
frontier = history[_i >> 1];
}
if (fromA > -1) diff.push(range.slice(fromA, toA, fromB, toB));
return {
v: diff.reverse()
};
}
diag = _diag;
},
_ret;
for (var diag = -size; diag <= size; diag += 2) {
_ret = _loop(diag);
if (_ret) return _ret.v;
}
if (size % 2 == 0) history.push(frontier.slice());
}
return [range.slice(start, endA, start, endB)];
}
var Span = function () {
function Span(length, data) {
_classCallCheck(this, Span);
this.length = length;
this.data = data;
}
_createClass(Span, [{
key: "cut",
value: function cut(length) {
return length == this.length ? this : new Span(length, this.data);
}
}], [{
key: "slice",
value: function slice(spans, from, to) {
if (from == to) return Span.none;
if (from == 0 && to == Span.len(spans)) return spans;
var result = [];
for (var i = 0, off = 0; off < to; i++) {
var span = spans[i],
end = off + span.length;
var overlap = Math.min(to, end) - Math.max(from, off);
if (overlap > 0) result.push(span.cut(overlap));
off = end;
}
return result;
}
}, {
key: "join",
value: function join(a, b, combine) {
if (a.length == 0) return b;
if (b.length == 0) return a;
var combined = combine(a[a.length - 1].data, b[0].data);
if (combined == null) return a.concat(b);
var result = a.slice(0, a.length - 1);
result.push(new Span(a[a.length - 1].length + b[0].length, combined));
for (var i = 1; i < b.length; i++) result.push(b[i]);
return result;
}
}, {
key: "len",
value: function len(spans) {
var len = 0;
for (var i = 0; i < spans.length; i++) len += spans[i].length;
return len;
}
}]);
return Span;
}();
Span.none = [];
var Change = function () {
function Change(fromA, toA, fromB, toB, deleted, inserted) {
_classCallCheck(this, Change);
this.fromA = fromA;
this.toA = toA;
this.fromB = fromB;
this.toB = toB;
this.deleted = deleted;
this.inserted = inserted;
}
_createClass(Change, [{
key: "lenA",
get: function get() {
return this.toA - this.fromA;
}
}, {
key: "lenB",
get: function get() {
return this.toB - this.fromB;
}
}, {
key: "slice",
value: function slice(startA, endA, startB, endB) {
if (startA == 0 && startB == 0 && endA == this.toA - this.fromA && endB == this.toB - this.fromB) return this;
return new Change(this.fromA + startA, this.fromA + endA, this.fromB + startB, this.fromB + endB, Span.slice(this.deleted, startA, endA), Span.slice(this.inserted, startB, endB));
}
}, {
key: "toJSON",
value: function toJSON() {
return this;
}
}], [{
key: "merge",
value: function merge(x, y, combine) {
if (x.length == 0) return y;
if (y.length == 0) return x;
var result = [];
for (var iX = 0, iY = 0, curX = x[0], curY = y[0];;) {
if (!curX && !curY) {
return result;
} else if (curX && (!curY || curX.toB < curY.fromA)) {
var off = iY ? y[iY - 1].toB - y[iY - 1].toA : 0;
result.push(off == 0 ? curX : new Change(curX.fromA, curX.toA, curX.fromB + off, curX.toB + off, curX.deleted, curX.inserted));
curX = iX++ == x.length ? null : x[iX];
} else if (curY && (!curX || curY.toA < curX.fromB)) {
var _off = iX ? x[iX - 1].toB - x[iX - 1].toA : 0;
result.push(_off == 0 ? curY : new Change(curY.fromA - _off, curY.toA - _off, curY.fromB, curY.toB, curY.deleted, curY.inserted));
curY = iY++ == y.length ? null : y[iY];
} else {
var pos = Math.min(curX.fromB, curY.fromA);
var fromA = Math.min(curX.fromA, curY.fromA - (iX ? x[iX - 1].toB - x[iX - 1].toA : 0)),
toA = fromA;
var fromB = Math.min(curY.fromB, curX.fromB + (iY ? y[iY - 1].toB - y[iY - 1].toA : 0)),
toB = fromB;
var deleted = Span.none,
inserted = Span.none;
var enteredX = false,
enteredY = false;
for (;;) {
var nextX = !curX ? 2e8 : pos >= curX.fromB ? curX.toB : curX.fromB;
var nextY = !curY ? 2e8 : pos >= curY.fromA ? curY.toA : curY.fromA;
var next = Math.min(nextX, nextY);
var inX = curX && pos >= curX.fromB,
inY = curY && pos >= curY.fromA;
if (!inX && !inY) break;
if (inX && pos == curX.fromB && !enteredX) {
deleted = Span.join(deleted, curX.deleted, combine);
toA += curX.lenA;
enteredX = true;
}
if (inX && !inY) {
inserted = Span.join(inserted, Span.slice(curX.inserted, pos - curX.fromB, next - curX.fromB), combine);
toB += next - pos;
}
if (inY && pos == curY.fromA && !enteredY) {
inserted = Span.join(inserted, curY.inserted, combine);
toB += curY.lenB;
enteredY = true;
}
if (inY && !inX) {
deleted = Span.join(deleted, Span.slice(curY.deleted, pos - curY.fromA, next - curY.fromA), combine);
toA += next - pos;
}
if (inX && next == curX.toB) {
curX = iX++ == x.length ? null : x[iX];
enteredX = false;
}
if (inY && next == curY.toA) {
curY = iY++ == y.length ? null : y[iY];
enteredY = false;
}
pos = next;
}
if (fromA < toA || fromB < toB) result.push(new Change(fromA, toA, fromB, toB, deleted, inserted));
}
}
}
}, {
key: "fromJSON",
value: function fromJSON(json) {
return new Change(json.fromA, json.toA, json.fromB, json.toB, json.deleted.map(function (d) {
return new Span(d.length, d.data);
}), json.inserted.map(function (d) {
return new Span(d.length, d.data);
}));
}
}]);
return Change;
}();
var letter;
try {
letter = new RegExp("[\\p{Alphabetic}_]", "u");
} catch (_) {}
var nonASCIISingleCaseWordChar = /[\u00df\u0587\u0590-\u05f4\u0600-\u06ff\u3040-\u309f\u30a0-\u30ff\u3400-\u4db5\u4e00-\u9fcc\uac00-\ud7af]/;
function isLetter(code) {
if (code < 128) return code >= 48 && code <= 57 || code >= 65 && code <= 90 || code >= 97 && code <= 122;
var ch = String.fromCharCode(code);
if (letter) return letter.test(ch);
return ch.toUpperCase() != ch.toLowerCase() || nonASCIISingleCaseWordChar.test(ch);
}
function getText(frag, start, end) {
var out = "";
function convert(frag, start, end) {
for (var i = 0, off = 0; i < frag.childCount; i++) {
var child = frag.child(i),
endOff = off + child.nodeSize;
var from = Math.max(off, start),
to = Math.min(endOff, end);
if (from < to) {
if (child.isText) {
out += child.text.slice(Math.max(0, start - off), Math.min(child.text.length, end - off));
} else if (child.isLeaf) {
out += " ";
} else {
if (from == off) out += " ";
convert(child.content, Math.max(0, from - off - 1), Math.min(child.content.size, end - off));
if (to == endOff) out += " ";
}
}
off = endOff;
}
}
convert(frag, start, end);
return out;
}
var MAX_SIMPLIFY_DISTANCE = 30;
function simplifyChanges(changes, doc) {
var result = [];
for (var i = 0; i < changes.length; i++) {
var end = changes[i].toB,
start = i;
while (i < changes.length - 1 && changes[i + 1].fromB <= end + MAX_SIMPLIFY_DISTANCE) end = changes[++i].toB;
simplifyAdjacentChanges(changes, start, i + 1, doc, result);
}
return result;
}
function simplifyAdjacentChanges(changes, from, to, doc, target) {
var start = Math.max(0, changes[from].fromB - MAX_SIMPLIFY_DISTANCE);
var end = Math.min(doc.content.size, changes[to - 1].toB + MAX_SIMPLIFY_DISTANCE);
var text = getText(doc.content, start, end);
for (var i = from; i < to; i++) {
var startI = i,
last = changes[i],
deleted = last.lenA,
inserted = last.lenB;
while (i < to - 1) {
var next = changes[i + 1],
boundary = false;
var prevLetter = last.toB == end ? false : isLetter(text.charCodeAt(last.toB - 1 - start));
for (var pos = last.toB; !boundary && pos < next.fromB; pos++) {
var nextLetter = pos == end ? false : isLetter(text.charCodeAt(pos - start));
if ((!prevLetter || !nextLetter) && pos != changes[startI].fromB) boundary = true;
prevLetter = nextLetter;
}
if (boundary) break;
deleted += next.lenA;
inserted += next.lenB;
last = next;
i++;
}
if (inserted > 0 && deleted > 0 && !(inserted == 1 && deleted == 1)) {
var _from = changes[startI].fromB,
_to = changes[i].toB;
if (_from < end && isLetter(text.charCodeAt(_from - start))) while (_from > start && isLetter(text.charCodeAt(_from - 1 - start))) _from--;
if (_to > start && isLetter(text.charCodeAt(_to - 1 - start))) while (_to < end && isLetter(text.charCodeAt(_to - start))) _to++;
var joined = fillChange(changes.slice(startI, i + 1), _from, _to);
var _last = target.length ? target[target.length - 1] : null;
if (_last && _last.toA == joined.fromA) target[target.length - 1] = new Change(_last.fromA, joined.toA, _last.fromB, joined.toB, _last.deleted.concat(joined.deleted), _last.inserted.concat(joined.inserted));else target.push(joined);
} else {
for (var j = startI; j <= i; j++) target.push(changes[j]);
}
}
return changes;
}
function combine(a, b) {
return a === b ? a : null;
}
function fillChange(changes, fromB, toB) {
var fromA = changes[0].fromA - (changes[0].fromB - fromB);
var last = changes[changes.length - 1];
var toA = last.toA + (toB - last.toB);
var deleted = Span.none,
inserted = Span.none;
var delData = (changes[0].deleted.length ? changes[0].deleted : changes[0].inserted)[0].data;
var insData = (changes[0].inserted.length ? changes[0].inserted : changes[0].deleted)[0].data;
for (var posA = fromA, posB = fromB, i = 0;; i++) {
var next = i == changes.length ? null : changes[i];
var endA = next ? next.fromA : toA,
endB = next ? next.fromB : toB;
if (endA > posA) deleted = Span.join(deleted, [new Span(endA - posA, delData)], combine);
if (endB > posB) inserted = Span.join(inserted, [new Span(endB - posB, insData)], combine);
if (!next) break;
deleted = Span.join(deleted, next.deleted, combine);
inserted = Span.join(inserted, next.inserted, combine);
if (deleted.length) delData = deleted[deleted.length - 1].data;
if (inserted.length) insData = inserted[inserted.length - 1].data;
posA = next.toA;
posB = next.toB;
}
return new Change(fromA, toA, fromB, toB, deleted, inserted);
}
var ChangeSet = function () {
function ChangeSet(config, changes) {
_classCallCheck(this, ChangeSet);
this.config = config;
this.changes = changes;
}
_createClass(ChangeSet, [{
key: "addSteps",
value: function addSteps(newDoc, maps, data) {
var _this = this;
var stepChanges = [];
var _loop2 = function _loop2() {
var d = Array.isArray(data) ? data[i] : data;
var off = 0;
maps[i].forEach(function (fromA, toA, fromB, toB) {
stepChanges.push(new Change(fromA + off, toA + off, fromB, toB, fromA == toA ? Span.none : [new Span(toA - fromA, d)], fromB == toB ? Span.none : [new Span(toB - fromB, d)]));
off = toB - fromB - (toA - fromA);
});
};
for (var i = 0; i < maps.length; i++) {
_loop2();
}
if (stepChanges.length == 0) return this;
var newChanges = mergeAll(stepChanges, this.config.combine);
var changes = Change.merge(this.changes, newChanges, this.config.combine);
var updated = changes;
var _loop3 = function _loop3(_i3) {
var change = updated[_i3];
if (change.fromA == change.toA || change.fromB == change.toB || !newChanges.some(function (r) {
return r.toB > change.fromB && r.fromB < change.toB;
})) {
_i2 = _i3;
return 0;
}
var diff = computeDiff(_this.config.doc.content, newDoc.content, change, _this.config.encoder);
if (diff.length == 1 && diff[0].fromB == 0 && diff[0].toB == change.toB - change.fromB) {
_i2 = _i3;
return 0;
}
if (updated == changes) updated = changes.slice();
if (diff.length == 1) {
updated[_i3] = diff[0];
} else {
var _updated;
(_updated = updated).splice.apply(_updated, [_i3, 1].concat(_toConsumableArray(diff)));
_i3 += diff.length - 1;
}
_i2 = _i3;
},
_ret2;
for (var _i2 = 0; _i2 < updated.length; _i2++) {
_ret2 = _loop3(_i2);
if (_ret2 === 0) continue;
}
return new ChangeSet(this.config, updated);
}
}, {
key: "startDoc",
get: function get() {
return this.config.doc;
}
}, {
key: "map",
value: function map(f) {
var mapSpan = function mapSpan(span) {
var newData = f(span);
return newData === span.data ? span : new Span(span.length, newData);
};
return new ChangeSet(this.config, this.changes.map(function (ch) {
return new Change(ch.fromA, ch.toA, ch.fromB, ch.toB, ch.deleted.map(mapSpan), ch.inserted.map(mapSpan));
}));
}
}, {
key: "changedRange",
value: function changedRange(b, maps) {
if (b == this) return null;
var touched = maps && touchedRange(maps);
var moved = touched ? touched.toB - touched.fromB - (touched.toA - touched.fromA) : 0;
function map(p) {
return !touched || p <= touched.fromA ? p : p + moved;
}
var from = touched ? touched.fromB : 2e8,
to = touched ? touched.toB : -2e8;
function add(start) {
var end = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : start;
from = Math.min(start, from);
to = Math.max(end, to);
}
var rA = this.changes,
rB = b.changes;
for (var iA = 0, iB = 0; iA < rA.length && iB < rB.length;) {
var rangeA = rA[iA],
rangeB = rB[iB];
if (rangeA && rangeB && sameRanges(rangeA, rangeB, map)) {
iA++;
iB++;
} else if (rangeB && (!rangeA || map(rangeA.fromB) >= rangeB.fromB)) {
add(rangeB.fromB, rangeB.toB);
iB++;
} else {
add(map(rangeA.fromB), map(rangeA.toB));
iA++;
}
}
return from <= to ? {
from: from,
to: to
} : null;
}
}], [{
key: "create",
value: function create(doc) {
var combine = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : function (a, b) {
return a === b ? a : null;
};
var tokenEncoder = arguments.length > 2 && arguments[2] !== undefined ? arguments[2] : DefaultEncoder;
var changes = arguments.length > 3 && arguments[3] !== undefined ? arguments[3] : [];
return new ChangeSet({
combine: combine,
doc: doc,
encoder: tokenEncoder
}, changes);
}
}]);
return ChangeSet;
}();
ChangeSet.computeDiff = computeDiff;
function mergeAll(ranges, combine) {
var start = arguments.length > 2 && arguments[2] !== undefined ? arguments[2] : 0;
var end = arguments.length > 3 && arguments[3] !== undefined ? arguments[3] : ranges.length;
if (end == start + 1) return [ranges[start]];
var mid = start + end >> 1;
return Change.merge(mergeAll(ranges, combine, start, mid), mergeAll(ranges, combine, mid, end), combine);
}
function endRange(maps) {
var from = 2e8,
to = -2e8;
for (var i = 0; i < maps.length; i++) {
var map = maps[i];
if (from != 2e8) {
from = map.map(from, -1);
to = map.map(to, 1);
}
map.forEach(function (_s, _e, start, end) {
from = Math.min(from, start);
to = Math.max(to, end);
});
}
return from == 2e8 ? null : {
from: from,
to: to
};
}
function touchedRange(maps) {
var b = endRange(maps);
if (!b) return null;
var a = endRange(maps.map(function (m) {
return m.invert();
}).reverse());
return {
fromA: a.from,
toA: a.to,
fromB: b.from,
toB: b.to
};
}
function sameRanges(a, b, map) {
return map(a.fromB) == b.fromB && map(a.toB) == b.toB && sameSpans(a.deleted, b.deleted) && sameSpans(a.inserted, b.inserted);
}
function sameSpans(a, b) {
if (a.length != b.length) return false;
for (var i = 0; i < a.length; i++) if (a[i].length != b[i].length || a[i].data !== b[i].data) return false;
return true;
}
exports.Change = Change;
exports.ChangeSet = ChangeSet;
exports.Span = Span;
exports.simplifyChanges = simplifyChanges;
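The `sameSpans` helper above compares span lists by length and by identity of their `data` values (`!==`), not by deep equality. A minimal self-contained sketch of that comparison, using plain `{length, data}` objects in place of `Span` instances:

```javascript
// Minimal stand-in for sameSpans: only `length` and `data` matter here.
function sameSpans(a, b) {
  if (a.length != b.length) return false;
  for (let i = 0; i < a.length; i++)
    if (a[i].length != b[i].length || a[i].data !== b[i].data) return false;
  return true;
}

const user = { id: 1 };
console.log(sameSpans(
  [{ length: 3, data: user }, { length: 1, data: "x" }],
  [{ length: 3, data: user }, { length: 1, data: "x" }]
)); // true: same lengths, identical data references
console.log(sameSpans(
  [{ length: 3, data: { id: 1 } }],
  [{ length: 3, data: { id: 1 } }]
)); // false: data is compared with !==, so equal-looking objects differ
```

Identity comparison keeps this check cheap; merging of metadata is handled separately by the `combine` function passed to `ChangeSet.create`.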

node_modules/prosemirror-changeset/dist/index.d.cts generated vendored Normal file

@@ -0,0 +1,186 @@
import { Mark, Node } from 'prosemirror-model';
import { StepMap } from 'prosemirror-transform';
/**
Stores metadata for a part of a change.
*/
declare class Span<Data = any> {
/**
The length of this span.
*/
readonly length: number;
/**
The data associated with this span.
*/
readonly data: Data;
}
/**
A replaced range with metadata associated with it.
*/
declare class Change<Data = any> {
/**
The start of the range deleted/replaced in the old document.
*/
readonly fromA: number;
/**
The end of the range in the old document.
*/
readonly toA: number;
/**
The start of the range inserted in the new document.
*/
readonly fromB: number;
/**
The end of the range in the new document.
*/
readonly toB: number;
/**
Data associated with the deleted content. The length of these
spans adds up to `this.toA - this.fromA`.
*/
readonly deleted: readonly Span<Data>[];
/**
Data associated with the inserted content. Length adds up to
`this.toB - this.fromB`.
*/
readonly inserted: readonly Span<Data>[];
/**
This merges two changesets (the end document of x should be the
start document of y) into a single one spanning the start of x to
the end of y.
*/
static merge<Data>(x: readonly Change<Data>[], y: readonly Change<Data>[], combine: (dataA: Data, dataB: Data) => Data): readonly Change<Data>[];
/**
Deserialize a change from JSON format.
*/
static fromJSON<Data>(json: ChangeJSON<Data>): Change<Data>;
/**
Returns a JSON-serializable object to represent this change.
*/
toJSON(): ChangeJSON<Data>;
}
/**
JSON-serialized form of a change.
*/
type ChangeJSON<Data> = {
fromA: number;
toA: number;
fromB: number;
toB: number;
deleted: readonly {
length: number;
data: Data;
}[];
inserted: readonly {
length: number;
data: Data;
}[];
};
/**
A token encoder can be passed when creating a `ChangeSet` in order
to influence the way the library runs its diffing algorithm. The
encoder determines how document tokens (such as nodes and
characters) are encoded and compared.
Note that both the encoding and the comparison may run many
times, so doing non-trivial work in these functions could
impact performance.
*/
interface TokenEncoder<T> {
/**
Encode a given character, with the given marks applied.
*/
encodeCharacter(char: number, marks: readonly Mark[]): T;
/**
Encode the start of a node or, if this is a leaf node, the
entire node.
*/
encodeNodeStart(node: Node): T;
/**
Encode the end token for the given node. It is valid to encode
every end token in the same way.
*/
encodeNodeEnd(node: Node): T;
/**
Compare the given tokens. Should return true when they count as
equal.
*/
compareTokens(a: T, b: T): boolean;
}
/**
Simplifies a set of changes for presentation. This makes the
assumption that having both insertions and deletions within a word
is confusing, and, when such changes occur without a word boundary
between them, they should be expanded to cover the entire set of
words (in the new document) they touch. An exception is made for
single-character replacements.
*/
declare function simplifyChanges(changes: readonly Change[], doc: Node): Change<any>[];
/**
A change set tracks the changes to a document from a given point
in the past. It condenses a number of step maps down to a flat
sequence of replacements, and simplifies replacements that
partially undo themselves by comparing their content.
*/
declare class ChangeSet<Data = any> {
/**
Replaced regions.
*/
readonly changes: readonly Change<Data>[];
/**
Computes a new changeset by adding the given step maps and
metadata (either as an array, per-map, or as a single value to be
associated with all maps) to the current set. Will not mutate the
old set.
Note that due to simplification that happens after each add,
incrementally adding steps might create a different final set
than adding all those changes at once, since different document
tokens might be matched during simplification depending on the
boundaries of the current changed ranges.
*/
addSteps(newDoc: Node, maps: readonly StepMap[], data: Data | readonly Data[]): ChangeSet<Data>;
/**
The starting document of the change set.
*/
get startDoc(): Node;
/**
Map the span's data values in the given set through a function
and construct a new set with the resulting data.
*/
map(f: (range: Span<Data>) => Data): ChangeSet<Data>;
/**
Compare two changesets and return the range in which they are
changed, if any. If the document changed between the maps, pass
the maps for the steps that changed it as second argument, and
make sure the method is called on the old set and passed the new
set. The returned positions will be in new document coordinates.
*/
changedRange(b: ChangeSet, maps?: readonly StepMap[]): {
from: number;
to: number;
} | null;
/**
Create a changeset with the given base object and configuration.
The `combine` function is used to compare and combine metadata—it
should return null when metadata isn't compatible, and a combined
version for a merged range when it is.
When given, a token encoder determines how document tokens are
serialized and compared when diffing the content produced by
changes. The default is to just compare nodes by name and text
by character, ignoring marks and attributes.
To serialize a change set, you can store its document and
change array as JSON, and then pass the deserialized (via
[`Change.fromJSON`](https://prosemirror.net/docs/ref/#changes.Change^fromJSON)) set of changes
as fourth argument to `create` to recreate the set.
*/
static create<Data = any>(doc: Node, combine?: (dataA: Data, dataB: Data) => Data, tokenEncoder?: TokenEncoder<any>, changes?: readonly Change<Data>[]): ChangeSet<Data>;
}
export { Change, type ChangeJSON, ChangeSet, Span, type TokenEncoder, simplifyChanges };

node_modules/prosemirror-changeset/dist/index.d.ts generated vendored Normal file

@@ -0,0 +1,186 @@
import { Mark, Node } from 'prosemirror-model';
import { StepMap } from 'prosemirror-transform';
/**
Stores metadata for a part of a change.
*/
declare class Span<Data = any> {
/**
The length of this span.
*/
readonly length: number;
/**
The data associated with this span.
*/
readonly data: Data;
}
/**
A replaced range with metadata associated with it.
*/
declare class Change<Data = any> {
/**
The start of the range deleted/replaced in the old document.
*/
readonly fromA: number;
/**
The end of the range in the old document.
*/
readonly toA: number;
/**
The start of the range inserted in the new document.
*/
readonly fromB: number;
/**
The end of the range in the new document.
*/
readonly toB: number;
/**
Data associated with the deleted content. The length of these
spans adds up to `this.toA - this.fromA`.
*/
readonly deleted: readonly Span<Data>[];
/**
Data associated with the inserted content. Length adds up to
`this.toB - this.fromB`.
*/
readonly inserted: readonly Span<Data>[];
/**
This merges two changesets (the end document of x should be the
start document of y) into a single one spanning the start of x to
the end of y.
*/
static merge<Data>(x: readonly Change<Data>[], y: readonly Change<Data>[], combine: (dataA: Data, dataB: Data) => Data): readonly Change<Data>[];
/**
Deserialize a change from JSON format.
*/
static fromJSON<Data>(json: ChangeJSON<Data>): Change<Data>;
/**
Returns a JSON-serializable object to represent this change.
*/
toJSON(): ChangeJSON<Data>;
}
/**
JSON-serialized form of a change.
*/
type ChangeJSON<Data> = {
fromA: number;
toA: number;
fromB: number;
toB: number;
deleted: readonly {
length: number;
data: Data;
}[];
inserted: readonly {
length: number;
data: Data;
}[];
};
/**
A token encoder can be passed when creating a `ChangeSet` in order
to influence the way the library runs its diffing algorithm. The
encoder determines how document tokens (such as nodes and
characters) are encoded and compared.
Note that both the encoding and the comparison may run many
times, so doing non-trivial work in these functions could
impact performance.
*/
interface TokenEncoder<T> {
/**
Encode a given character, with the given marks applied.
*/
encodeCharacter(char: number, marks: readonly Mark[]): T;
/**
Encode the start of a node or, if this is a leaf node, the
entire node.
*/
encodeNodeStart(node: Node): T;
/**
Encode the end token for the given node. It is valid to encode
every end token in the same way.
*/
encodeNodeEnd(node: Node): T;
/**
Compare the given tokens. Should return true when they count as
equal.
*/
compareTokens(a: T, b: T): boolean;
}
/**
Simplifies a set of changes for presentation. This makes the
assumption that having both insertions and deletions within a word
is confusing, and, when such changes occur without a word boundary
between them, they should be expanded to cover the entire set of
words (in the new document) they touch. An exception is made for
single-character replacements.
*/
declare function simplifyChanges(changes: readonly Change[], doc: Node): Change<any>[];
/**
A change set tracks the changes to a document from a given point
in the past. It condenses a number of step maps down to a flat
sequence of replacements, and simplifies replacements that
partially undo themselves by comparing their content.
*/
declare class ChangeSet<Data = any> {
/**
Replaced regions.
*/
readonly changes: readonly Change<Data>[];
/**
Computes a new changeset by adding the given step maps and
metadata (either as an array, per-map, or as a single value to be
associated with all maps) to the current set. Will not mutate the
old set.
Note that due to simplification that happens after each add,
incrementally adding steps might create a different final set
than adding all those changes at once, since different document
tokens might be matched during simplification depending on the
boundaries of the current changed ranges.
*/
addSteps(newDoc: Node, maps: readonly StepMap[], data: Data | readonly Data[]): ChangeSet<Data>;
/**
The starting document of the change set.
*/
get startDoc(): Node;
/**
Map the span's data values in the given set through a function
and construct a new set with the resulting data.
*/
map(f: (range: Span<Data>) => Data): ChangeSet<Data>;
/**
Compare two changesets and return the range in which they are
changed, if any. If the document changed between the maps, pass
the maps for the steps that changed it as second argument, and
make sure the method is called on the old set and passed the new
set. The returned positions will be in new document coordinates.
*/
changedRange(b: ChangeSet, maps?: readonly StepMap[]): {
from: number;
to: number;
} | null;
/**
Create a changeset with the given base object and configuration.
The `combine` function is used to compare and combine metadata—it
should return null when metadata isn't compatible, and a combined
version for a merged range when it is.
When given, a token encoder determines how document tokens are
serialized and compared when diffing the content produced by
changes. The default is to just compare nodes by name and text
by character, ignoring marks and attributes.
To serialize a change set, you can store its document and
change array as JSON, and then pass the deserialized (via
[`Change.fromJSON`](https://prosemirror.net/docs/ref/#changes.Change^fromJSON)) set of changes
as fourth argument to `create` to recreate the set.
*/
static create<Data = any>(doc: Node, combine?: (dataA: Data, dataB: Data) => Data, tokenEncoder?: TokenEncoder<any>, changes?: readonly Change<Data>[]): ChangeSet<Data>;
}
export { Change, type ChangeJSON, ChangeSet, Span, type TokenEncoder, simplifyChanges };

node_modules/prosemirror-changeset/dist/index.js generated vendored Normal file

@@ -0,0 +1,715 @@
function typeID(type) {
let cache = type.schema.cached.changeSetIDs || (type.schema.cached.changeSetIDs = Object.create(null));
let id = cache[type.name];
if (id == null)
cache[type.name] = id = Object.keys(type.schema.nodes).indexOf(type.name) + 1;
return id;
}
// The default token encoder, which encodes node open tokens as
// strings holding the node name, characters as their character
// code, and node close tokens as negative numbers.
const DefaultEncoder = {
encodeCharacter: char => char,
encodeNodeStart: node => node.type.name,
encodeNodeEnd: node => -typeID(node.type),
compareTokens: (a, b) => a === b
};
// Convert the given range of a fragment to tokens.
function tokens(frag, encoder, start, end, target) {
for (let i = 0, off = 0; i < frag.childCount; i++) {
let child = frag.child(i), endOff = off + child.nodeSize;
let from = Math.max(off, start), to = Math.min(endOff, end);
if (from < to) {
if (child.isText) {
for (let j = from; j < to; j++)
target.push(encoder.encodeCharacter(child.text.charCodeAt(j - off), child.marks));
}
else if (child.isLeaf) {
target.push(encoder.encodeNodeStart(child));
}
else {
if (from == off)
target.push(encoder.encodeNodeStart(child));
tokens(child.content, encoder, Math.max(off + 1, from) - off - 1, Math.min(endOff - 1, to) - off - 1, target);
if (to == endOff)
target.push(encoder.encodeNodeEnd(child));
}
}
off = endOff;
}
return target;
}
// The code below will refuse to compute a diff with more than 5000
// insertions or deletions, which takes about 300ms to reach on my
// machine. This is a safeguard against runaway computations.
const MAX_DIFF_SIZE = 5000;
// This obscure mess of constants computes the minimum length of an
// unchanged range (not at the start/end of the compared content). The
// idea is to make it higher in bigger replacements, so that you don't
// get a diff soup of coincidentally identical letters when replacing
// a paragraph.
function minUnchanged(sizeA, sizeB) {
return Math.min(15, Math.max(2, Math.floor(Math.max(sizeA, sizeB) / 10)));
}
function computeDiff(fragA, fragB, range, encoder = DefaultEncoder) {
let tokA = tokens(fragA, encoder, range.fromA, range.toA, []);
let tokB = tokens(fragB, encoder, range.fromB, range.toB, []);
// Scan from both sides to cheaply eliminate work
let start = 0, endA = tokA.length, endB = tokB.length;
let cmp = encoder.compareTokens;
while (start < tokA.length && start < tokB.length && cmp(tokA[start], tokB[start]))
start++;
if (start == tokA.length && start == tokB.length)
return [];
while (endA > start && endB > start && cmp(tokA[endA - 1], tokB[endB - 1]))
endA--, endB--;
// If the result is simple _or_ too big to cheaply compute, return
// the remaining region as the diff
if (endA == start || endB == start || (endA == endB && endA == start + 1))
return [range.slice(start, endA, start, endB)];
// This is an implementation of Myers' diff algorithm
// See https://neil.fraser.name/writing/diff/myers.pdf and
// https://blog.jcoglan.com/2017/02/12/the-myers-diff-algorithm-part-1/
let lenA = endA - start, lenB = endB - start;
let max = Math.min(MAX_DIFF_SIZE, lenA + lenB), off = max + 1;
let history = [];
let frontier = [];
for (let len = off * 2, i = 0; i < len; i++)
frontier[i] = -1;
for (let size = 0; size <= max; size++) {
for (let diag = -size; diag <= size; diag += 2) {
let next = frontier[diag + 1 + max], prev = frontier[diag - 1 + max];
let x = next < prev ? prev : next + 1, y = x + diag;
while (x < lenA && y < lenB && cmp(tokA[start + x], tokB[start + y]))
x++, y++;
frontier[diag + max] = x;
// Found a match
if (x >= lenA && y >= lenB) {
// Trace back through the history to build up a set of changed ranges.
let diff = [], minSpan = minUnchanged(endA - start, endB - start);
// Used to add steps to a diff one at a time, back to front, merging
// ones that are less than minSpan tokens apart
let fromA = -1, toA = -1, fromB = -1, toB = -1;
let add = (fA, tA, fB, tB) => {
if (fromA > -1 && fromA < tA + minSpan) {
fromA = fA;
fromB = fB;
}
else {
if (fromA > -1)
diff.push(range.slice(fromA, toA, fromB, toB));
fromA = fA;
toA = tA;
fromB = fB;
toB = tB;
}
};
for (let i = size - 1; i >= 0; i--) {
let next = frontier[diag + 1 + max], prev = frontier[diag - 1 + max];
if (next < prev) { // Deletion
diag--;
x = prev + start;
y = x + diag;
add(x, x, y, y + 1);
}
else { // Insertion
diag++;
x = next + start;
y = x + diag;
add(x, x + 1, y, y);
}
frontier = history[i >> 1];
}
if (fromA > -1)
diff.push(range.slice(fromA, toA, fromB, toB));
return diff.reverse();
}
}
// Since only either odd or even diagonals are read from each
// frontier, we only copy them every other iteration.
if (size % 2 == 0)
history.push(frontier.slice());
}
// The loop exited, meaning the maximum amount of work was done.
// Just return a change spanning the entire range.
return [range.slice(start, endA, start, endB)];
}
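`computeDiff` above runs Myers' diff over encoded tokens, tracking a frontier array indexed by diagonal. A self-contained sketch of the same greedy forward search on plain strings, simplified to report only the minimum number of edits, without the history/traceback that builds changed ranges (names here are illustrative, not the library's):

```javascript
// Greedy forward pass of Myers' diff: returns the length of the shortest
// edit script (insertions + deletions) turning `a` into `b`.
function myersDistance(a, b) {
  const max = a.length + b.length;
  // frontier[k + max] = furthest x reached on diagonal k (k = x - y).
  const frontier = new Array(2 * max + 1).fill(-1);
  frontier[1 + max] = 0;
  for (let d = 0; d <= max; d++) {
    for (let k = -d; k <= d; k += 2) {
      // Step down (insert from b) or right (delete from a), whichever
      // frontier reaches further along this diagonal.
      let x = (k == -d || (k != d && frontier[k - 1 + max] < frontier[k + 1 + max]))
        ? frontier[k + 1 + max]        // down: x unchanged
        : frontier[k - 1 + max] + 1;   // right: advance x
      let y = x - k;
      // Follow the "snake" of matching characters.
      while (x < a.length && y < b.length && a[x] === b[y]) { x++; y++; }
      frontier[k + max] = x;
      if (x >= a.length && y >= b.length) return d;
    }
  }
  return max;
}

myersDistance("abcabba", "cbabac"); // classic example: 5 edits
```

The library's version additionally snapshots frontiers into `history` so it can walk back and emit merged `Change` ranges, and caps the search at `MAX_DIFF_SIZE` to bound worst-case cost.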
/**
Stores metadata for a part of a change.
*/
class Span {
/**
@internal
*/
constructor(
/**
The length of this span.
*/
length,
/**
The data associated with this span.
*/
data) {
this.length = length;
this.data = data;
}
/**
@internal
*/
cut(length) {
return length == this.length ? this : new Span(length, this.data);
}
/**
@internal
*/
static slice(spans, from, to) {
if (from == to)
return Span.none;
if (from == 0 && to == Span.len(spans))
return spans;
let result = [];
for (let i = 0, off = 0; off < to; i++) {
let span = spans[i], end = off + span.length;
let overlap = Math.min(to, end) - Math.max(from, off);
if (overlap > 0)
result.push(span.cut(overlap));
off = end;
}
return result;
}
/**
@internal
*/
static join(a, b, combine) {
if (a.length == 0)
return b;
if (b.length == 0)
return a;
let combined = combine(a[a.length - 1].data, b[0].data);
if (combined == null)
return a.concat(b);
let result = a.slice(0, a.length - 1);
result.push(new Span(a[a.length - 1].length + b[0].length, combined));
for (let i = 1; i < b.length; i++)
result.push(b[i]);
return result;
}
/**
@internal
*/
static len(spans) {
let len = 0;
for (let i = 0; i < spans.length; i++)
len += spans[i].length;
return len;
}
}
/**
@internal
*/
Span.none = [];
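`Span.slice` above trims a span list down to a sub-range, cutting spans that partially overlap the boundaries. A minimal stand-alone version over plain `{length, data}` objects (an illustrative sketch, not the library's code):

```javascript
// Cut a list of {length, data} spans down to the sub-range [from, to),
// trimming spans that only partially overlap it.
function sliceSpans(spans, from, to) {
  if (from == to) return [];
  let result = [];
  for (let i = 0, off = 0; off < to; i++) {
    let span = spans[i], end = off + span.length;
    let overlap = Math.min(to, end) - Math.max(from, off);
    if (overlap > 0) result.push({ length: overlap, data: span.data });
    off = end;
  }
  return result;
}

// Spans covering positions 0-3 ("a") and 3-8 ("b"); slice out [2, 5).
sliceSpans([{ length: 3, data: "a" }, { length: 5, data: "b" }], 2, 5);
// → [{ length: 1, data: "a" }, { length: 2, data: "b" }]
```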
/**
A replaced range with metadata associated with it.
*/
class Change {
/**
@internal
*/
constructor(
/**
The start of the range deleted/replaced in the old document.
*/
fromA,
/**
The end of the range in the old document.
*/
toA,
/**
The start of the range inserted in the new document.
*/
fromB,
/**
The end of the range in the new document.
*/
toB,
/**
Data associated with the deleted content. The length of these
spans adds up to `this.toA - this.fromA`.
*/
deleted,
/**
Data associated with the inserted content. Length adds up to
`this.toB - this.fromB`.
*/
inserted) {
this.fromA = fromA;
this.toA = toA;
this.fromB = fromB;
this.toB = toB;
this.deleted = deleted;
this.inserted = inserted;
}
/**
@internal
*/
get lenA() { return this.toA - this.fromA; }
/**
@internal
*/
get lenB() { return this.toB - this.fromB; }
/**
@internal
*/
slice(startA, endA, startB, endB) {
if (startA == 0 && startB == 0 && endA == this.toA - this.fromA &&
endB == this.toB - this.fromB)
return this;
return new Change(this.fromA + startA, this.fromA + endA, this.fromB + startB, this.fromB + endB, Span.slice(this.deleted, startA, endA), Span.slice(this.inserted, startB, endB));
}
/**
This merges two changesets (the end document of x should be the
start document of y) into a single one spanning the start of x to
the end of y.
*/
static merge(x, y, combine) {
if (x.length == 0)
return y;
if (y.length == 0)
return x;
let result = [];
// Iterate over both sets in parallel, using the middle coordinate
// system (B in x, A in y) to synchronize.
for (let iX = 0, iY = 0, curX = x[0], curY = y[0];;) {
if (!curX && !curY) {
return result;
}
else if (curX && (!curY || curX.toB < curY.fromA)) { // curX entirely in front of curY
let off = iY ? y[iY - 1].toB - y[iY - 1].toA : 0;
result.push(off == 0 ? curX :
new Change(curX.fromA, curX.toA, curX.fromB + off, curX.toB + off, curX.deleted, curX.inserted));
curX = iX++ == x.length ? null : x[iX];
}
else if (curY && (!curX || curY.toA < curX.fromB)) { // curY entirely in front of curX
let off = iX ? x[iX - 1].toB - x[iX - 1].toA : 0;
result.push(off == 0 ? curY :
new Change(curY.fromA - off, curY.toA - off, curY.fromB, curY.toB, curY.deleted, curY.inserted));
curY = iY++ == y.length ? null : y[iY];
}
else { // Touch, need to merge
// The rules for merging ranges are that deletions from the
// old set and insertions from the new are kept. Areas of the
// middle document covered by a but not by b are insertions
// from a that need to be added, and areas covered by b but
// not a are deletions from b that need to be added.
let pos = Math.min(curX.fromB, curY.fromA);
let fromA = Math.min(curX.fromA, curY.fromA - (iX ? x[iX - 1].toB - x[iX - 1].toA : 0)), toA = fromA;
let fromB = Math.min(curY.fromB, curX.fromB + (iY ? y[iY - 1].toB - y[iY - 1].toA : 0)), toB = fromB;
let deleted = Span.none, inserted = Span.none;
// Used to prevent appending ins/del range for the same Change twice
let enteredX = false, enteredY = false;
// Need to have an inner loop since any number of further
// ranges might be touching this group
for (;;) {
let nextX = !curX ? 2e8 : pos >= curX.fromB ? curX.toB : curX.fromB;
let nextY = !curY ? 2e8 : pos >= curY.fromA ? curY.toA : curY.fromA;
let next = Math.min(nextX, nextY);
let inX = curX && pos >= curX.fromB, inY = curY && pos >= curY.fromA;
if (!inX && !inY)
break;
if (inX && pos == curX.fromB && !enteredX) {
deleted = Span.join(deleted, curX.deleted, combine);
toA += curX.lenA;
enteredX = true;
}
if (inX && !inY) {
inserted = Span.join(inserted, Span.slice(curX.inserted, pos - curX.fromB, next - curX.fromB), combine);
toB += next - pos;
}
if (inY && pos == curY.fromA && !enteredY) {
inserted = Span.join(inserted, curY.inserted, combine);
toB += curY.lenB;
enteredY = true;
}
if (inY && !inX) {
deleted = Span.join(deleted, Span.slice(curY.deleted, pos - curY.fromA, next - curY.fromA), combine);
toA += next - pos;
}
if (inX && next == curX.toB) {
curX = iX++ == x.length ? null : x[iX];
enteredX = false;
}
if (inY && next == curY.toA) {
curY = iY++ == y.length ? null : y[iY];
enteredY = false;
}
pos = next;
}
if (fromA < toA || fromB < toB)
result.push(new Change(fromA, toA, fromB, toB, deleted, inserted));
}
}
}
/**
Deserialize a change from JSON format.
*/
static fromJSON(json) {
return new Change(json.fromA, json.toA, json.fromB, json.toB, json.deleted.map(d => new Span(d.length, d.data)), json.inserted.map(d => new Span(d.length, d.data)));
}
/**
Returns a JSON-serializable object to represent this change.
*/
toJSON() { return this; }
}
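The `toJSON`/`fromJSON` pair above (new in 2.4.0 per the changelog) relies on `Change` holding only plain data: `toJSON` can return the object itself, and `fromJSON` just rebuilds the `Span` wrappers. A minimal sketch of that round-trip, with plain objects standing in for the classes:

```javascript
// Sketch of the Change JSON round-trip: toJSON can return the object
// itself because every field is plain data; fromJSON rebuilds the spans.
function changeToJSON(change) { return change; }
function changeFromJSON(json) {
  return {
    fromA: json.fromA, toA: json.toA, fromB: json.fromB, toB: json.toB,
    deleted: json.deleted.map(d => ({ length: d.length, data: d.data })),
    inserted: json.inserted.map(d => ({ length: d.length, data: d.data })),
  };
}

const change = { fromA: 2, toA: 5, fromB: 2, toB: 4,
  deleted: [{ length: 3, data: "u1" }], inserted: [{ length: 2, data: "u1" }] };
const restored = changeFromJSON(JSON.parse(JSON.stringify(changeToJSON(change))));
// restored has the same positions and span lengths as the original
```

Note that span `data` must itself be JSON-serializable for this to round-trip; the library leaves that contract to the caller.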
let letter;
// If the runtime supports Unicode properties in regexps, that's a
// good source of info on whether something is a letter.
try {
letter = new RegExp("[\\p{Alphabetic}_]", "u");
}
catch (_) { }
// Otherwise, we see if the character changes when upper/lowercased,
// or if it is part of these common single-case scripts.
const nonASCIISingleCaseWordChar = /[\u00df\u0587\u0590-\u05f4\u0600-\u06ff\u3040-\u309f\u30a0-\u30ff\u3400-\u4db5\u4e00-\u9fcc\uac00-\ud7af]/;
function isLetter(code) {
if (code < 128)
return code >= 48 && code <= 57 || code >= 65 && code <= 90 || code >= 97 && code <= 122;
let ch = String.fromCharCode(code);
if (letter)
return letter.test(ch);
return ch.toUpperCase() != ch.toLowerCase() || nonASCIISingleCaseWordChar.test(ch);
}
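The ASCII fast path in `isLetter` treats digits and both letter cases as word characters, falling back to regexp or case-change checks for non-ASCII. A stand-alone copy of the conventional ASCII ranges (0-9, A-Z, a-z) for illustration:

```javascript
// ASCII word-character test: digits 48-57, uppercase 65-90,
// lowercase 97-122. Everything else (punctuation, space) is a boundary.
function isAsciiWordChar(code) {
  return code >= 48 && code <= 57 || code >= 65 && code <= 90 ||
    code >= 97 && code <= 122;
}

isAsciiWordChar("a".charCodeAt(0)); // true
isAsciiWordChar("5".charCodeAt(0)); // true
isAsciiWordChar("-".charCodeAt(0)); // false
```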
// Convert a range of document into a string, so that we can easily
// access characters at a given position. Treat non-text tokens as
// spaces so that they aren't considered part of a word.
function getText(frag, start, end) {
let out = "";
function convert(frag, start, end) {
for (let i = 0, off = 0; i < frag.childCount; i++) {
let child = frag.child(i), endOff = off + child.nodeSize;
let from = Math.max(off, start), to = Math.min(endOff, end);
if (from < to) {
if (child.isText) {
out += child.text.slice(Math.max(0, start - off), Math.min(child.text.length, end - off));
}
else if (child.isLeaf) {
out += " ";
}
else {
if (from == off)
out += " ";
convert(child.content, Math.max(0, from - off - 1), Math.min(child.content.size, end - off));
if (to == endOff)
out += " ";
}
}
off = endOff;
}
}
convert(frag, start, end);
return out;
}
// The distance changes have to be apart for us to not consider them
// candidates for merging.
const MAX_SIMPLIFY_DISTANCE = 30;
/**
Simplifies a set of changes for presentation. This makes the
assumption that having both insertions and deletions within a word
is confusing, and, when such changes occur without a word boundary
between them, they should be expanded to cover the entire set of
words (in the new document) they touch. An exception is made for
single-character replacements.
*/
function simplifyChanges(changes, doc) {
let result = [];
for (let i = 0; i < changes.length; i++) {
let end = changes[i].toB, start = i;
while (i < changes.length - 1 && changes[i + 1].fromB <= end + MAX_SIMPLIFY_DISTANCE)
end = changes[++i].toB;
simplifyAdjacentChanges(changes, start, i + 1, doc, result);
}
return result;
}
function simplifyAdjacentChanges(changes, from, to, doc, target) {
let start = Math.max(0, changes[from].fromB - MAX_SIMPLIFY_DISTANCE);
let end = Math.min(doc.content.size, changes[to - 1].toB + MAX_SIMPLIFY_DISTANCE);
let text = getText(doc.content, start, end);
for (let i = from; i < to; i++) {
let startI = i, last = changes[i], deleted = last.lenA, inserted = last.lenB;
while (i < to - 1) {
let next = changes[i + 1], boundary = false;
let prevLetter = last.toB == end ? false : isLetter(text.charCodeAt(last.toB - 1 - start));
for (let pos = last.toB; !boundary && pos < next.fromB; pos++) {
let nextLetter = pos == end ? false : isLetter(text.charCodeAt(pos - start));
if ((!prevLetter || !nextLetter) && pos != changes[startI].fromB)
boundary = true;
prevLetter = nextLetter;
}
if (boundary)
break;
deleted += next.lenA;
inserted += next.lenB;
last = next;
i++;
}
if (inserted > 0 && deleted > 0 && !(inserted == 1 && deleted == 1)) {
let from = changes[startI].fromB, to = changes[i].toB;
if (from < end && isLetter(text.charCodeAt(from - start)))
while (from > start && isLetter(text.charCodeAt(from - 1 - start)))
from--;
if (to > start && isLetter(text.charCodeAt(to - 1 - start)))
while (to < end && isLetter(text.charCodeAt(to - start)))
to++;
let joined = fillChange(changes.slice(startI, i + 1), from, to);
let last = target.length ? target[target.length - 1] : null;
if (last && last.toA == joined.fromA)
target[target.length - 1] = new Change(last.fromA, joined.toA, last.fromB, joined.toB, last.deleted.concat(joined.deleted), last.inserted.concat(joined.inserted));
else
target.push(joined);
}
else {
for (let j = startI; j <= i; j++)
target.push(changes[j]);
}
}
return changes;
}
function combine(a, b) { return a === b ? a : null; }
function fillChange(changes, fromB, toB) {
let fromA = changes[0].fromA - (changes[0].fromB - fromB);
let last = changes[changes.length - 1];
let toA = last.toA + (toB - last.toB);
let deleted = Span.none, inserted = Span.none;
let delData = (changes[0].deleted.length ? changes[0].deleted : changes[0].inserted)[0].data;
let insData = (changes[0].inserted.length ? changes[0].inserted : changes[0].deleted)[0].data;
for (let posA = fromA, posB = fromB, i = 0;; i++) {
let next = i == changes.length ? null : changes[i];
let endA = next ? next.fromA : toA, endB = next ? next.fromB : toB;
if (endA > posA)
deleted = Span.join(deleted, [new Span(endA - posA, delData)], combine);
if (endB > posB)
inserted = Span.join(inserted, [new Span(endB - posB, insData)], combine);
if (!next)
break;
deleted = Span.join(deleted, next.deleted, combine);
inserted = Span.join(inserted, next.inserted, combine);
if (deleted.length)
delData = deleted[deleted.length - 1].data;
if (inserted.length)
insData = inserted[inserted.length - 1].data;
posA = next.toA;
posB = next.toB;
}
return new Change(fromA, toA, fromB, toB, deleted, inserted);
}
/**
A change set tracks the changes to a document from a given point
in the past. It condenses a number of step maps down to a flat
sequence of replacements, and simplifies replacements that
partially undo themselves by comparing their content.
*/
class ChangeSet {
/**
@internal
*/
constructor(
/**
@internal
*/
config,
/**
Replaced regions.
*/
changes) {
this.config = config;
this.changes = changes;
}
/**
Computes a new changeset by adding the given step maps and
metadata (either as an array, per-map, or as a single value to be
associated with all maps) to the current set. Will not mutate the
old set.
Note that due to simplification that happens after each add,
incrementally adding steps might create a different final set
than adding all those changes at once, since different document
tokens might be matched during simplification depending on the
boundaries of the current changed ranges.
*/
addSteps(newDoc, maps, data) {
// This works by inspecting the position maps for the changes,
// which indicate what parts of the document were replaced by new
// content, and the size of that new content. It uses these to
// build up Change objects.
//
// These change objects are put in sets and merged together using
// Change.merge, giving us the changes created by the new steps.
// Those changes can then be merged with the existing set of
// changes.
//
// For each change that was touched by the new steps, we recompute
// a diff to try to minimize the change by dropping matching
// pieces of the old and new document from the change.
let stepChanges = [];
// Add spans for new steps.
for (let i = 0; i < maps.length; i++) {
let d = Array.isArray(data) ? data[i] : data;
let off = 0;
maps[i].forEach((fromA, toA, fromB, toB) => {
stepChanges.push(new Change(fromA + off, toA + off, fromB, toB, fromA == toA ? Span.none : [new Span(toA - fromA, d)], fromB == toB ? Span.none : [new Span(toB - fromB, d)]));
off = (toB - fromB) - (toA - fromA);
});
}
if (stepChanges.length == 0)
return this;
let newChanges = mergeAll(stepChanges, this.config.combine);
let changes = Change.merge(this.changes, newChanges, this.config.combine);
let updated = changes;
// Minimize changes when possible
for (let i = 0; i < updated.length; i++) {
let change = updated[i];
if (change.fromA == change.toA || change.fromB == change.toB ||
// Only look at changes that touch newly added changed ranges
!newChanges.some(r => r.toB > change.fromB && r.fromB < change.toB))
continue;
let diff = computeDiff(this.config.doc.content, newDoc.content, change, this.config.encoder);
// Fast path: If they are completely different, don't do anything
if (diff.length == 1 && diff[0].fromB == 0 && diff[0].toB == change.toB - change.fromB)
continue;
if (updated == changes)
updated = changes.slice();
if (diff.length == 1) {
updated[i] = diff[0];
}
else {
updated.splice(i, 1, ...diff);
i += diff.length - 1;
}
}
return new ChangeSet(this.config, updated);
}
/**
The starting document of the change set.
*/
get startDoc() { return this.config.doc; }
/**
Map the span's data values in the given set through a function
and construct a new set with the resulting data.
*/
map(f) {
let mapSpan = (span) => {
let newData = f(span);
return newData === span.data ? span : new Span(span.length, newData);
};
return new ChangeSet(this.config, this.changes.map((ch) => {
return new Change(ch.fromA, ch.toA, ch.fromB, ch.toB, ch.deleted.map(mapSpan), ch.inserted.map(mapSpan));
}));
}
/**
Compare two changesets and return the range in which they are
changed, if any. If the document changed between the maps, pass
the maps for the steps that changed it as second argument, and
make sure the method is called on the old set and passed the new
set. The returned positions will be in new document coordinates.
*/
changedRange(b, maps) {
if (b == this)
return null;
let touched = maps && touchedRange(maps);
let moved = touched ? (touched.toB - touched.fromB) - (touched.toA - touched.fromA) : 0;
function map(p) {
return !touched || p <= touched.fromA ? p : p + moved;
}
let from = touched ? touched.fromB : 2e8, to = touched ? touched.toB : -2e8;
function add(start, end = start) {
from = Math.min(start, from);
to = Math.max(end, to);
}
let rA = this.changes, rB = b.changes;
for (let iA = 0, iB = 0; iA < rA.length && iB < rB.length;) {
let rangeA = rA[iA], rangeB = rB[iB];
if (rangeA && rangeB && sameRanges(rangeA, rangeB, map)) {
iA++;
iB++;
}
else if (rangeB && (!rangeA || map(rangeA.fromB) >= rangeB.fromB)) {
add(rangeB.fromB, rangeB.toB);
iB++;
}
else {
add(map(rangeA.fromB), map(rangeA.toB));
iA++;
}
}
return from <= to ? { from, to } : null;
}
/**
Create a changeset with the given base object and configuration.
The `combine` function is used to compare and combine metadata—it
should return null when metadata isn't compatible, and a combined
version for a merged range when it is.
When given, a token encoder determines how document tokens are
serialized and compared when diffing the content produced by
changes. The default is to just compare nodes by name and text
by character, ignoring marks and attributes.
To serialize a change set, you can store its document and
change array as JSON, and then pass the deserialized (via
[`Change.fromJSON`](https://prosemirror.net/docs/ref/#changes.Change^fromJSON)) set of changes
as fourth argument to `create` to recreate the set.
*/
static create(doc, combine = (a, b) => a === b ? a : null, tokenEncoder = DefaultEncoder, changes = []) {
return new ChangeSet({ combine, doc, encoder: tokenEncoder }, changes);
}
}
/**
Exported for testing @internal
*/
ChangeSet.computeDiff = computeDiff;
// Divide-and-conquer approach to merging a series of ranges.
function mergeAll(ranges, combine, start = 0, end = ranges.length) {
if (end == start + 1)
return [ranges[start]];
let mid = (start + end) >> 1;
return Change.merge(mergeAll(ranges, combine, start, mid), mergeAll(ranges, combine, mid, end), combine);
}
function endRange(maps) {
let from = 2e8, to = -2e8;
for (let i = 0; i < maps.length; i++) {
let map = maps[i];
if (from != 2e8) {
from = map.map(from, -1);
to = map.map(to, 1);
}
map.forEach((_s, _e, start, end) => {
from = Math.min(from, start);
to = Math.max(to, end);
});
}
return from == 2e8 ? null : { from, to };
}
function touchedRange(maps) {
let b = endRange(maps);
if (!b)
return null;
let a = endRange(maps.map(m => m.invert()).reverse());
return { fromA: a.from, toA: a.to, fromB: b.from, toB: b.to };
}
function sameRanges(a, b, map) {
return map(a.fromB) == b.fromB && map(a.toB) == b.toB &&
sameSpans(a.deleted, b.deleted) && sameSpans(a.inserted, b.inserted);
}
function sameSpans(a, b) {
if (a.length != b.length)
return false;
for (let i = 0; i < a.length; i++)
if (a[i].length != b[i].length || a[i].data !== b[i].data)
return false;
return true;
}
export { Change, ChangeSet, Span, simplifyChanges };
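The `changedRange` method above relies on a small position-mapping rule: positions at or before the touched range keep their old coordinates, while later positions shift by the replacement's net size change. A self-contained sketch of that rule (the `Touched` shape mirrors `touchedRange`'s result; this is an illustration, not the library's exported API):

```typescript
// A sketch of the position-mapping rule used by changedRange (the
// `Touched` shape mirrors touchedRange's result; illustration only,
// not the library's exported API).
type Touched = {fromA: number, toA: number, fromB: number, toB: number}

function mapThroughTouched(p: number, touched: Touched | null): number {
  // Net size change of the replacement: (new length) - (old length).
  const moved = touched ? (touched.toB - touched.fromB) - (touched.toA - touched.fromA) : 0
  // Positions at or before the touched range keep their coordinates;
  // later positions shift by the net size change.
  return !touched || p <= touched.fromA ? p : p + moved
}
```

With a replacement of positions 2..4 by five tokens (net growth 3), position 1 stays put while position 6 maps to 9.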

node_modules/prosemirror-changeset/package.json generated vendored Normal file
@@ -0,0 +1,40 @@
{
"name": "prosemirror-changeset",
"version": "2.4.0",
"description": "Distills a series of editing steps into deleted and added ranges",
"type": "module",
"main": "dist/index.cjs",
"module": "dist/index.js",
"types": "dist/index.d.ts",
"exports": {
"import": "./dist/index.js",
"require": "./dist/index.cjs"
},
"sideEffects": false,
"license": "MIT",
"maintainers": [
{
"name": "Marijn Haverbeke",
"email": "marijn@haverbeke.berlin",
"web": "http://marijnhaverbeke.nl"
}
],
"repository": {
"type": "git",
"url": "git://github.com/prosemirror/prosemirror-changeset.git"
},
"dependencies": {
"prosemirror-transform": "^1.0.0"
},
"devDependencies": {
"@prosemirror/buildhelper": "^0.1.5",
"prosemirror-model": "^1.0.0",
"prosemirror-test-builder": "^1.0.0",
"builddocs": "^1.0.8"
},
"scripts": {
"test": "pm-runtests",
"prepare": "pm-buildhelper src/changeset.ts",
"build-readme": "builddocs --format markdown --main src/README.md src/changeset.ts > README.md"
}
}

node_modules/prosemirror-changeset/src/README.md generated vendored Normal file
@@ -0,0 +1,32 @@
# prosemirror-changeset
This is a helper module that can turn a sequence of document changes
into a set of insertions and deletions, for example to display them in
a change-tracking interface. Such a set can be built up incrementally,
in order to do such change tracking in a halfway performant way during
live editing.
This code is licensed under an [MIT
licence](https://github.com/ProseMirror/prosemirror-changeset/blob/master/LICENSE).
## Programming interface
Insertions and deletions are represented as spans—ranges in the
document. The deleted spans refer to the original document, whereas
the inserted ones point into the current document.
It is possible to associate arbitrary data values with such spans, for
example to track the user that made the change, the timestamp at which
it was made, or the step data necessary to invert it again.
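The coordinate convention can be illustrated with a minimal stand-in for the `Change`/`Span` shapes (hypothetical objects for illustration, not the library classes): deleted spans measure a range in the original document, inserted spans a range in the current one.

```typescript
// Minimal stand-ins for illustration only (hypothetical shapes, not
// the real prosemirror-changeset classes).
type SpanLike = {length: number, data: string}
type ChangeLike = {
  fromA: number, toA: number,   // replaced range in the original document
  fromB: number, toB: number,   // inserted range in the current document
  deleted: SpanLike[],          // span lengths sum to toA - fromA
  inserted: SpanLike[]          // span lengths sum to toB - fromB
}

// "abcd" -> "axyzd": "bc" (positions 1..3) replaced by "xyz" (1..4)
const change: ChangeLike = {
  fromA: 1, toA: 3,
  fromB: 1, toB: 4,
  deleted: [{length: 2, data: "user-1"}],
  inserted: [{length: 3, data: "user-1"}],
}

const lenA = change.deleted.reduce((n, s) => n + s.length, 0)
const lenB = change.inserted.reduce((n, s) => n + s.length, 0)
```

The invariant that span lengths add up to the range sizes is what lets the library slice and join spans without re-reading the documents.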
@Change
@Span
@ChangeSet
@simplifyChanges
@TokenEncoder
@ChangeJSON

node_modules/prosemirror-changeset/src/change.ts generated vendored Normal file
@@ -0,0 +1,189 @@
/// Stores metadata for a part of a change.
export class Span<Data = any> {
/// @internal
constructor(
/// The length of this span.
readonly length: number,
/// The data associated with this span.
readonly data: Data
) {}
/// @internal
cut(length: number) {
return length == this.length ? this : new Span(length, this.data)
}
/// @internal
static slice(spans: readonly Span[], from: number, to: number) {
if (from == to) return Span.none
if (from == 0 && to == Span.len(spans)) return spans
let result = []
for (let i = 0, off = 0; off < to; i++) {
let span = spans[i], end = off + span.length
let overlap = Math.min(to, end) - Math.max(from, off)
if (overlap > 0) result.push(span.cut(overlap))
off = end
}
return result
}
/// @internal
static join<Data>(a: readonly Span<Data>[], b: readonly Span<Data>[], combine: (dataA: Data, dataB: Data) => Data) {
if (a.length == 0) return b
if (b.length == 0) return a
let combined = combine(a[a.length - 1].data, b[0].data)
if (combined == null) return a.concat(b)
let result = a.slice(0, a.length - 1)
result.push(new Span(a[a.length - 1].length + b[0].length, combined))
for (let i = 1; i < b.length; i++) result.push(b[i])
return result
}
/// @internal
static len(spans: readonly Span[]) {
let len = 0
for (let i = 0; i < spans.length; i++) len += spans[i].length
return len
}
/// @internal
static none: readonly Span[] = []
}
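`Span.join` only fuses the boundary pair of spans when `combine` can merge their data; otherwise the boundary is preserved. A simplified standalone reimplementation of that rule (a sketch, not the class above):

```typescript
// Simplified reimplementation of Span.join's merge rule (sketch, not
// the class above): adjacent spans fuse only when `combine` accepts
// their data; otherwise the boundary is kept.
type S = {length: number, data: string}

function joinSpans(a: S[], b: S[], combine: (x: string, y: string) => string | null): S[] {
  if (a.length === 0) return b
  if (b.length === 0) return a
  const merged = combine(a[a.length - 1].data, b[0].data)
  if (merged == null) return a.concat(b)
  const fused = {length: a[a.length - 1].length + b[0].length, data: merged}
  return a.slice(0, -1).concat([fused], b.slice(1))
}

// The default create() combine: equal data merges, anything else keeps
// separate spans.
const sameData = (x: string, y: string) => (x === y ? x : null)
```

Joining `[2×"a"]` with `[3×"a"]` yields one five-token span, while `[2×"a"]` with `[3×"b"]` stays two spans.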
/// A replaced range with metadata associated with it.
export class Change<Data = any> {
/// @internal
constructor(
/// The start of the range deleted/replaced in the old document.
readonly fromA: number,
/// The end of the range in the old document.
readonly toA: number,
/// The start of the range inserted in the new document.
readonly fromB: number,
/// The end of the range in the new document.
readonly toB: number,
/// Data associated with the deleted content. The length of these
/// spans adds up to `this.toA - this.fromA`.
readonly deleted: readonly Span<Data>[],
/// Data associated with the inserted content. Length adds up to
/// `this.toB - this.fromB`.
readonly inserted: readonly Span<Data>[]
) {}
/// @internal
get lenA() { return this.toA - this.fromA }
/// @internal
get lenB() { return this.toB - this.fromB }
/// @internal
slice(startA: number, endA: number, startB: number, endB: number): Change<Data> {
if (startA == 0 && startB == 0 && endA == this.toA - this.fromA &&
endB == this.toB - this.fromB) return this
return new Change(this.fromA + startA, this.fromA + endA,
this.fromB + startB, this.fromB + endB,
Span.slice(this.deleted, startA, endA),
Span.slice(this.inserted, startB, endB))
}
/// This merges two changesets (the end document of x should be the
/// start document of y) into a single one spanning the start of x to
/// the end of y.
static merge<Data>(x: readonly Change<Data>[],
y: readonly Change<Data>[],
combine: (dataA: Data, dataB: Data) => Data): readonly Change<Data>[] {
if (x.length == 0) return y
if (y.length == 0) return x
let result = []
// Iterate over both sets in parallel, using the middle coordinate
// system (B in x, A in y) to synchronize.
for (let iX = 0, iY = 0, curX: Change<Data> | null = x[0], curY: Change<Data> | null = y[0];;) {
if (!curX && !curY) {
return result
} else if (curX && (!curY || curX.toB < curY.fromA)) { // curX entirely in front of curY
let off = iY ? y[iY - 1].toB - y[iY - 1].toA : 0
result.push(off == 0 ? curX :
new Change(curX.fromA, curX.toA, curX.fromB + off, curX.toB + off,
curX.deleted, curX.inserted))
curX = iX++ == x.length ? null : x[iX]
} else if (curY && (!curX || curY.toA < curX.fromB)) { // curY entirely in front of curX
let off = iX ? x[iX - 1].toB - x[iX - 1].toA : 0
result.push(off == 0 ? curY :
new Change(curY.fromA - off, curY.toA - off, curY.fromB, curY.toB,
curY.deleted, curY.inserted))
curY = iY++ == y.length ? null : y[iY]
} else { // Touch, need to merge
// The rules for merging ranges are that deletions from the
// old set and insertions from the new are kept. Areas of the
// middle document covered by a but not by b are insertions
// from a that need to be added, and areas covered by b but
// not a are deletions from b that need to be added.
let pos = Math.min(curX!.fromB, curY!.fromA)
let fromA = Math.min(curX!.fromA, curY!.fromA - (iX ? x[iX - 1].toB - x[iX - 1].toA : 0)), toA = fromA
let fromB = Math.min(curY!.fromB, curX!.fromB + (iY ? y[iY - 1].toB - y[iY - 1].toA : 0)), toB = fromB
let deleted = Span.none, inserted = Span.none
// Used to prevent appending ins/del range for the same Change twice
let enteredX = false, enteredY = false
// Need to have an inner loop since any number of further
// ranges might be touching this group
for (;;) {
let nextX = !curX ? 2e8 : pos >= curX.fromB ? curX.toB : curX.fromB
let nextY = !curY ? 2e8 : pos >= curY.fromA ? curY.toA : curY.fromA
let next = Math.min(nextX, nextY)
let inX = curX && pos >= curX.fromB, inY = curY && pos >= curY.fromA
if (!inX && !inY) break
if (inX && pos == curX!.fromB && !enteredX) {
deleted = Span.join(deleted, curX!.deleted, combine)
toA += curX!.lenA
enteredX = true
}
if (inX && !inY) {
inserted = Span.join(inserted, Span.slice(curX!.inserted, pos - curX!.fromB, next - curX!.fromB), combine)
toB += next - pos
}
if (inY && pos == curY!.fromA && !enteredY) {
inserted = Span.join(inserted, curY!.inserted, combine)
toB += curY!.lenB
enteredY = true
}
if (inY && !inX) {
deleted = Span.join(deleted, Span.slice(curY!.deleted, pos - curY!.fromA, next - curY!.fromA), combine)
toA += next - pos
}
if (inX && next == curX!.toB) {
curX = iX++ == x.length ? null : x[iX]
enteredX = false
}
if (inY && next == curY!.toA) {
curY = iY++ == y.length ? null : y[iY]
enteredY = false
}
pos = next
}
if (fromA < toA || fromB < toB)
result.push(new Change(fromA, toA, fromB, toB, deleted, inserted))
}
}
}
/// Deserialize a change from JSON format.
static fromJSON<Data>(json: ChangeJSON<Data>) {
return new Change(json.fromA, json.toA, json.fromB, json.toB,
json.deleted.map(d => new Span(d.length, d.data)),
json.inserted.map(d => new Span(d.length, d.data)))
}
  /// Returns a JSON-serializable object to represent this change.
toJSON(): ChangeJSON<Data> { return this }
}
/// JSON-serialized form of a change.
export type ChangeJSON<Data> = {
fromA: number, toA: number,
fromB: number, toB: number,
deleted: readonly {length: number, data: Data}[],
inserted: readonly {length: number, data: Data}[]
}
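Since `toJSON` returns the change itself and `fromJSON` rebuilds spans from plain objects, persisting a change set is a plain JSON round trip over its change array. A sketch with plain objects standing in for `Change` (hypothetical helper, not part of the module):

```typescript
// Plain-object stand-in for ChangeJSON (illustration only).
type SpanJSON = {length: number, data: unknown}
type ChangeJSONLike = {
  fromA: number, toA: number, fromB: number, toB: number,
  deleted: SpanJSON[], inserted: SpanJSON[]
}

// Round-trip a change through a JSON string, as one would when
// storing a change set alongside its start document.
function roundTrip(change: ChangeJSONLike): ChangeJSONLike {
  return JSON.parse(JSON.stringify(change))
}

const original: ChangeJSONLike = {
  fromA: 1, toA: 3, fromB: 1, toB: 4,
  deleted: [{length: 2, data: "u1"}],
  inserted: [{length: 3, data: "u1"}],
}
```

Because every field is a number or plain data, no custom replacer or reviver is needed as long as the span data itself is JSON-safe.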

node_modules/prosemirror-changeset/src/changeset.ts generated vendored Normal file
@@ -0,0 +1,212 @@
import {Node} from "prosemirror-model"
import {StepMap} from "prosemirror-transform"
import {computeDiff, TokenEncoder, DefaultEncoder} from "./diff"
import {Change, Span, ChangeJSON} from "./change"
export {Change, Span, ChangeJSON}
export {simplifyChanges} from "./simplify"
export {TokenEncoder}
/// A change set tracks the changes to a document from a given point
/// in the past. It condenses a number of step maps down to a flat
/// sequence of replacements, and simplifies replacements that
/// partially undo themselves by comparing their content.
export class ChangeSet<Data = any> {
/// @internal
constructor(
/// @internal
readonly config: {
doc: Node,
combine: (dataA: Data, dataB: Data) => Data,
encoder: TokenEncoder<any>
},
/// Replaced regions.
readonly changes: readonly Change<Data>[]
) {}
/// Computes a new changeset by adding the given step maps and
/// metadata (either as an array, per-map, or as a single value to be
/// associated with all maps) to the current set. Will not mutate the
/// old set.
///
/// Note that due to simplification that happens after each add,
/// incrementally adding steps might create a different final set
/// than adding all those changes at once, since different document
/// tokens might be matched during simplification depending on the
/// boundaries of the current changed ranges.
addSteps(newDoc: Node, maps: readonly StepMap[], data: Data | readonly Data[]): ChangeSet<Data> {
// This works by inspecting the position maps for the changes,
// which indicate what parts of the document were replaced by new
// content, and the size of that new content. It uses these to
// build up Change objects.
//
// These change objects are put in sets and merged together using
// Change.merge, giving us the changes created by the new steps.
// Those changes can then be merged with the existing set of
// changes.
//
// For each change that was touched by the new steps, we recompute
// a diff to try to minimize the change by dropping matching
// pieces of the old and new document from the change.
let stepChanges: Change<Data>[] = []
// Add spans for new steps.
for (let i = 0; i < maps.length; i++) {
let d = Array.isArray(data) ? data[i] : data
let off = 0
maps[i].forEach((fromA, toA, fromB, toB) => {
stepChanges.push(new Change(fromA + off, toA + off, fromB, toB,
fromA == toA ? Span.none : [new Span(toA - fromA, d)],
fromB == toB ? Span.none : [new Span(toB - fromB, d)]))
off = (toB - fromB) - (toA - fromA)
})
}
if (stepChanges.length == 0) return this
let newChanges = mergeAll(stepChanges, this.config.combine)
let changes = Change.merge(this.changes, newChanges, this.config.combine)
let updated: Change<Data>[] = changes as Change<Data>[]
// Minimize changes when possible
for (let i = 0; i < updated.length; i++) {
let change = updated[i]
if (change.fromA == change.toA || change.fromB == change.toB ||
// Only look at changes that touch newly added changed ranges
!newChanges.some(r => r.toB > change.fromB && r.fromB < change.toB)) continue
let diff = computeDiff(this.config.doc.content, newDoc.content, change, this.config.encoder)
// Fast path: If they are completely different, don't do anything
if (diff.length == 1 && diff[0].fromB == 0 && diff[0].toB == change.toB - change.fromB)
continue
if (updated == changes) updated = changes.slice()
if (diff.length == 1) {
updated[i] = diff[0]
} else {
updated.splice(i, 1, ...diff)
i += diff.length - 1
}
}
return new ChangeSet(this.config, updated)
}
/// The starting document of the change set.
get startDoc(): Node { return this.config.doc }
/// Map the span's data values in the given set through a function
/// and construct a new set with the resulting data.
map(f: (range: Span<Data>) => Data): ChangeSet<Data> {
let mapSpan = (span: Span<Data>) => {
let newData = f(span)
return newData === span.data ? span : new Span(span.length, newData)
}
return new ChangeSet(this.config, this.changes.map((ch: Change<Data>) => {
return new Change(ch.fromA, ch.toA, ch.fromB, ch.toB, ch.deleted.map(mapSpan), ch.inserted.map(mapSpan))
}))
}
/// Compare two changesets and return the range in which they are
/// changed, if any. If the document changed between the maps, pass
/// the maps for the steps that changed it as second argument, and
/// make sure the method is called on the old set and passed the new
/// set. The returned positions will be in new document coordinates.
changedRange(b: ChangeSet, maps?: readonly StepMap[]): {from: number, to: number} | null {
if (b == this) return null
let touched = maps && touchedRange(maps)
let moved = touched ? (touched.toB - touched.fromB) - (touched.toA - touched.fromA) : 0
function map(p: number) {
return !touched || p <= touched.fromA ? p : p + moved
}
let from = touched ? touched.fromB : 2e8, to = touched ? touched.toB : -2e8
function add(start: number, end = start) {
from = Math.min(start, from); to = Math.max(end, to)
}
let rA = this.changes, rB = b.changes
for (let iA = 0, iB = 0; iA < rA.length && iB < rB.length;) {
let rangeA = rA[iA], rangeB = rB[iB]
if (rangeA && rangeB && sameRanges(rangeA, rangeB, map)) { iA++; iB++ }
else if (rangeB && (!rangeA || map(rangeA.fromB) >= rangeB.fromB)) { add(rangeB.fromB, rangeB.toB); iB++ }
else { add(map(rangeA.fromB), map(rangeA.toB)); iA++ }
}
return from <= to ? {from, to} : null
}
/// Create a changeset with the given base object and configuration.
///
/// The `combine` function is used to compare and combine metadata—it
/// should return null when metadata isn't compatible, and a combined
/// version for a merged range when it is.
///
/// When given, a token encoder determines how document tokens are
/// serialized and compared when diffing the content produced by
/// changes. The default is to just compare nodes by name and text
/// by character, ignoring marks and attributes.
///
/// To serialize a change set, you can store its document and
/// change array as JSON, and then pass the deserialized (via
/// [`Change.fromJSON`](#changes.Change^fromJSON)) set of changes
/// as fourth argument to `create` to recreate the set.
static create<Data = any>(
doc: Node,
combine: (dataA: Data, dataB: Data) => Data = (a, b) => a === b ? a : null as any,
tokenEncoder: TokenEncoder<any> = DefaultEncoder,
changes: readonly Change<Data>[] = []
) {
return new ChangeSet({combine, doc, encoder: tokenEncoder}, changes)
}
/// Exported for testing @internal
static computeDiff = computeDiff
}
// Divide-and-conquer approach to merging a series of ranges.
function mergeAll<Data>(
ranges: readonly Change<Data>[],
combine: (dA: Data, dB: Data) => Data,
start = 0, end = ranges.length
): readonly Change<Data>[] {
if (end == start + 1) return [ranges[start]]
let mid = (start + end) >> 1
return Change.merge(mergeAll(ranges, combine, start, mid),
mergeAll(ranges, combine, mid, end), combine)
}
function endRange(maps: readonly StepMap[]) {
let from = 2e8, to = -2e8
for (let i = 0; i < maps.length; i++) {
let map = maps[i]
if (from != 2e8) {
from = map.map(from, -1)
to = map.map(to, 1)
}
map.forEach((_s, _e, start, end) => {
from = Math.min(from, start)
to = Math.max(to, end)
})
}
return from == 2e8 ? null : {from, to}
}
function touchedRange(maps: readonly StepMap[]) {
let b = endRange(maps)
if (!b) return null
let a = endRange(maps.map(m => m.invert()).reverse())!
return {fromA: a.from, toA: a.to, fromB: b.from, toB: b.to}
}
function sameRanges<Data>(a: Change<Data>, b: Change<Data>, map: (pos: number) => number) {
return map(a.fromB) == b.fromB && map(a.toB) == b.toB &&
sameSpans(a.deleted, b.deleted) && sameSpans(a.inserted, b.inserted)
}
function sameSpans<Data>(a: readonly Span<Data>[], b: readonly Span<Data>[]) {
if (a.length != b.length) return false
for (let i = 0; i < a.length; i++)
if (a[i].length != b[i].length || a[i].data !== b[i].data) return false
return true
}

node_modules/prosemirror-changeset/src/diff.ts generated vendored Normal file
@@ -0,0 +1,151 @@
import {Fragment, Node, NodeType, Mark} from "prosemirror-model"
import {Change} from "./change"
/// A token encoder can be passed when creating a `ChangeSet` in order
/// to influence the way the library runs its diffing algorithm. The
/// encoder determines how document tokens (such as nodes and
/// characters) are encoded and compared.
///
/// Note that both the encoding and the comparison may run a lot, and
/// doing non-trivial work in these functions could impact
/// performance.
export interface TokenEncoder<T> {
/// Encode a given character, with the given marks applied.
encodeCharacter(char: number, marks: readonly Mark[]): T
/// Encode the start of a node or, if this is a leaf node, the
/// entire node.
encodeNodeStart(node: Node): T
/// Encode the end token for the given node. It is valid to encode
/// every end token in the same way.
encodeNodeEnd(node: Node): T
/// Compare the given tokens. Should return true when they count as
/// equal.
compareTokens(a: T, b: T): boolean
}
function typeID(type: NodeType) {
let cache: Record<string, number> = type.schema.cached.changeSetIDs || (type.schema.cached.changeSetIDs = Object.create(null))
let id = cache[type.name]
if (id == null) cache[type.name] = id = Object.keys(type.schema.nodes).indexOf(type.name) + 1
return id
}
// The default token encoder, which encodes node open tokens as
// strings holding the node name, characters as their character
// code, and node close tokens as negative numbers.
export const DefaultEncoder: TokenEncoder<number | string> = {
encodeCharacter: char => char,
encodeNodeStart: node => node.type.name,
encodeNodeEnd: node => -typeID(node.type),
compareTokens: (a, b) => a === b
}
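A custom encoder can make the diff sensitive to information the default ignores. The sketch below (hypothetical shapes standing in for prosemirror-model's `Mark` and `Node` types, not the real classes) encodes each character together with its sorted mark names, so that toggling emphasis on a word shows up as a change:

```typescript
// Hypothetical mark-aware encoder sketch (assumed minimal shapes, not
// the real prosemirror-model types): characters carry their sorted
// mark names, so plain "foo" and emphasized "foo" diff as different
// tokens.
type MarkLike = {type: {name: string}}
type NodeLike = {type: {name: string}}

const markAwareEncoder = {
  encodeCharacter: (char: number, marks: readonly MarkLike[]): string =>
    char + ":" + marks.map(m => m.type.name).sort().join(","),
  encodeNodeStart: (node: NodeLike): string => node.type.name,
  // Encoding end tokens per node type keeps different closings distinct.
  encodeNodeEnd: (node: NodeLike): string => "/" + node.type.name,
  compareTokens: (a: string, b: string): boolean => a === b,
}
```

Since encoding and comparison run for every token, keeping these functions to cheap string or number work matters for performance.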
// Convert the given range of a fragment to tokens.
function tokens<T>(frag: Fragment, encoder: TokenEncoder<T>, start: number, end: number, target: T[]) {
for (let i = 0, off = 0; i < frag.childCount; i++) {
let child = frag.child(i), endOff = off + child.nodeSize
let from = Math.max(off, start), to = Math.min(endOff, end)
if (from < to) {
if (child.isText) {
for (let j = from; j < to; j++) target.push(encoder.encodeCharacter(child.text!.charCodeAt(j - off), child.marks))
} else if (child.isLeaf) {
target.push(encoder.encodeNodeStart(child))
} else {
if (from == off) target.push(encoder.encodeNodeStart(child))
tokens(child.content, encoder, Math.max(off + 1, from) - off - 1, Math.min(endOff - 1, to) - off - 1, target)
if (to == endOff) target.push(encoder.encodeNodeEnd(child))
}
}
off = endOff
}
return target
}
// The code below will refuse to compute a diff with more than 5000
// insertions or deletions, which takes about 300ms to reach on my
// machine. This is a safeguard against runaway computations.
const MAX_DIFF_SIZE = 5000
// This obscure mess of constants computes the minimum length of an
// unchanged range (not at the start/end of the compared content). The
// idea is to make it higher in bigger replacements, so that you don't
// get a diff soup of coincidentally identical letters when replacing
// a paragraph.
function minUnchanged(sizeA: number, sizeB: number) {
return Math.min(15, Math.max(2, Math.floor(Math.max(sizeA, sizeB) / 10)))
}
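Concretely: two ten-token replacements only need a two-token matching run, a hundred-token replacement needs ten, and anything from 150 tokens up hits the fifteen-token cap. The formula copied out for a quick check:

```typescript
// Copy of minUnchanged above: the required unchanged-run length grows
// with a tenth of the larger side, clamped to the range [2, 15].
function minUnchanged(sizeA: number, sizeB: number): number {
  return Math.min(15, Math.max(2, Math.floor(Math.max(sizeA, sizeB) / 10)))
}
```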
export function computeDiff(fragA: Fragment, fragB: Fragment, range: Change, encoder: TokenEncoder<any> = DefaultEncoder) {
let tokA = tokens(fragA, encoder, range.fromA, range.toA, [])
let tokB = tokens(fragB, encoder, range.fromB, range.toB, [])
// Scan from both sides to cheaply eliminate work
let start = 0, endA = tokA.length, endB = tokB.length
let cmp = encoder.compareTokens
while (start < tokA.length && start < tokB.length && cmp(tokA[start], tokB[start])) start++
if (start == tokA.length && start == tokB.length) return []
while (endA > start && endB > start && cmp(tokA[endA - 1], tokB[endB - 1])) endA--, endB--
// If the result is simple _or_ too big to cheaply compute, return
// the remaining region as the diff
if (endA == start || endB == start || (endA == endB && endA == start + 1))
return [range.slice(start, endA, start, endB)]
// This is an implementation of Myers' diff algorithm
// See https://neil.fraser.name/writing/diff/myers.pdf and
// https://blog.jcoglan.com/2017/02/12/the-myers-diff-algorithm-part-1/
let lenA = endA - start, lenB = endB - start
let max = Math.min(MAX_DIFF_SIZE, lenA + lenB), off = max + 1
let history: number[][] = []
let frontier: number[] = []
for (let len = off * 2, i = 0; i < len; i++) frontier[i] = -1
for (let size = 0; size <= max; size++) {
for (let diag = -size; diag <= size; diag += 2) {
let next = frontier[diag + 1 + max], prev = frontier[diag - 1 + max]
let x = next < prev ? prev : next + 1, y = x + diag
while (x < lenA && y < lenB && cmp(tokA[start + x], tokB[start + y])) x++, y++
frontier[diag + max] = x
// Found a match
if (x >= lenA && y >= lenB) {
// Trace back through the history to build up a set of changed ranges.
let diff = [], minSpan = minUnchanged(endA - start, endB - start)
// Used to add steps to a diff one at a time, back to front, merging
// ones that are less than minSpan tokens apart
let fromA = -1, toA = -1, fromB = -1, toB = -1
let add = (fA: number, tA: number, fB: number, tB: number) => {
if (fromA > -1 && fromA < tA + minSpan) {
fromA = fA; fromB = fB
} else {
if (fromA > -1)
diff.push(range.slice(fromA, toA, fromB, toB))
fromA = fA; toA = tA
fromB = fB; toB = tB
}
}
for (let i = size - 1; i >= 0; i--) {
let next = frontier[diag + 1 + max], prev = frontier[diag - 1 + max]
if (next < prev) { // Deletion
diag--
x = prev + start; y = x + diag
add(x, x, y, y + 1)
} else { // Insertion
diag++
x = next + start; y = x + diag
add(x, x + 1, y, y)
}
frontier = history[i >> 1]
}
if (fromA > -1) diff.push(range.slice(fromA, toA, fromB, toB))
return diff.reverse()
}
}
// Since only either odd or even diagonals are read from each
// frontier, we only copy them every other iteration.
if (size % 2 == 0) history.push(frontier.slice())
}
// The loop exited, meaning the maximum amount of work was done.
// Just return a change spanning the entire range.
return [range.slice(start, endA, start, endB)]
}
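The frontier bookkeeping above is the standard greedy form of Myers' algorithm. A minimal self-contained version that only reports D, the length of the shortest edit script, over strings instead of document tokens (an illustration of the frontier loop, not the module's diff function):

```typescript
// Minimal greedy Myers diff over strings: returns D, the length of
// the shortest edit script (illustration of the frontier loop above,
// not the module's diff function).
function myersDistance(a: string, b: string): number {
  const max = a.length + b.length
  if (max === 0) return 0
  const frontier: number[] = new Array(2 * max + 1).fill(0)
  for (let d = 0; d <= max; d++) {
    for (let k = -d; k <= d; k += 2) {
      // Step down (insertion) from diagonal k+1 or right (deletion)
      // from k-1, whichever frontier reaches further.
      let x = k === -d || (k !== d && frontier[k - 1 + max] < frontier[k + 1 + max])
        ? frontier[k + 1 + max]
        : frontier[k - 1 + max] + 1
      let y = x - k
      // Follow the free diagonal while tokens match.
      while (x < a.length && y < b.length && a[x] === b[y]) { x++; y++ }
      frontier[k + max] = x
      if (x >= a.length && y >= b.length) return d
    }
  }
  return max
}
```

Myers' paper uses `abcabba` versus `cbabac` as the worked example, which needs a five-step edit script.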

node_modules/prosemirror-changeset/src/simplify.ts generated vendored Normal file
@@ -0,0 +1,132 @@
import {Fragment, Node} from "prosemirror-model"
import {Span, Change} from "./change"
let letter: RegExp | undefined
// If the runtime supports unicode properties in regexps, that's a good
// source of info on whether something is a letter.
try { letter = new RegExp("[\\p{Alphabetic}_]", "u") } catch(_) {}
// Otherwise, we see if the character changes when upper/lowercased,
// or if it is part of these common single-case scripts.
const nonASCIISingleCaseWordChar = /[\u00df\u0587\u0590-\u05f4\u0600-\u06ff\u3040-\u309f\u30a0-\u30ff\u3400-\u4db5\u4e00-\u9fcc\uac00-\ud7af]/
function isLetter(code: number) {
if (code < 128)
    return code >= 48 && code <= 57 || code >= 65 && code <= 90 || code >= 97 && code <= 122
let ch = String.fromCharCode(code)
if (letter) return letter.test(ch)
return ch.toUpperCase() != ch.toLowerCase() || nonASCIISingleCaseWordChar.test(ch)
}
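The fallback chain above can be exercised on its own: an ASCII fast path, then the Unicode property regex where available, then the case-change heuristic plus the single-case script ranges. A standalone copy for a quick check (sketch, not the module's function):

```typescript
// Standalone copy of the isLetter fallback chain: ASCII fast path,
// then the Unicode property regex, then the case-change heuristic.
const letterRe = (() => { try { return new RegExp("[\\p{Alphabetic}_]", "u") } catch { return undefined } })()
const singleCase = /[\u00df\u0587\u0590-\u05f4\u0600-\u06ff\u3040-\u309f\u30a0-\u30ff\u3400-\u4db5\u4e00-\u9fcc\uac00-\ud7af]/

function isLetterSketch(code: number): boolean {
  if (code < 128)
    return (code >= 48 && code <= 57) || (code >= 65 && code <= 90) || (code >= 97 && code <= 122)
  const ch = String.fromCharCode(code)
  if (letterRe) return letterRe.test(ch)
  return ch.toUpperCase() != ch.toLowerCase() || singleCase.test(ch)
}
```

Note that digits count as word characters here, which keeps edits inside numbers from being split at digit boundaries.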
// Convert a range of document into a string, so that we can easily
// access characters at a given position. Treat non-text tokens as
// spaces so that they aren't considered part of a word.
function getText(frag: Fragment, start: number, end: number) {
let out = ""
function convert(frag: Fragment, start: number, end: number) {
for (let i = 0, off = 0; i < frag.childCount; i++) {
let child = frag.child(i), endOff = off + child.nodeSize
let from = Math.max(off, start), to = Math.min(endOff, end)
if (from < to) {
if (child.isText) {
out += child.text!.slice(Math.max(0, start - off), Math.min(child.text!.length, end - off))
} else if (child.isLeaf) {
out += " "
} else {
if (from == off) out += " "
convert(child.content, Math.max(0, from - off - 1), Math.min(child.content.size, end - off))
if (to == endOff) out += " "
}
}
off = endOff
}
}
convert(frag, start, end)
return out
}
// The distance changes have to be apart for us to not consider them
// candidates for merging.
const MAX_SIMPLIFY_DISTANCE = 30
/// Simplifies a set of changes for presentation. This makes the
/// assumption that having both insertions and deletions within a word
/// is confusing, and, when such changes occur without a word boundary
/// between them, they should be expanded to cover the entire set of
/// words (in the new document) they touch. An exception is made for
/// single-character replacements.
export function simplifyChanges(changes: readonly Change[], doc: Node) {
let result: Change[] = []
for (let i = 0; i < changes.length; i++) {
let end = changes[i].toB, start = i
while (i < changes.length - 1 && changes[i + 1].fromB <= end + MAX_SIMPLIFY_DISTANCE)
end = changes[++i].toB
simplifyAdjacentChanges(changes, start, i + 1, doc, result)
}
return result
}
function simplifyAdjacentChanges(changes: readonly Change[], from: number, to: number, doc: Node, target: Change[]) {
let start = Math.max(0, changes[from].fromB - MAX_SIMPLIFY_DISTANCE)
let end = Math.min(doc.content.size, changes[to - 1].toB + MAX_SIMPLIFY_DISTANCE)
let text = getText(doc.content, start, end)
for (let i = from; i < to; i++) {
let startI = i, last = changes[i], deleted = last.lenA, inserted = last.lenB
while (i < to - 1) {
let next = changes[i + 1], boundary = false
let prevLetter = last.toB == end ? false : isLetter(text.charCodeAt(last.toB - 1 - start))
for (let pos = last.toB; !boundary && pos < next.fromB; pos++) {
let nextLetter = pos == end ? false : isLetter(text.charCodeAt(pos - start))
if ((!prevLetter || !nextLetter) && pos != changes[startI].fromB) boundary = true
prevLetter = nextLetter
}
if (boundary) break
deleted += next.lenA; inserted += next.lenB
last = next
i++
}
if (inserted > 0 && deleted > 0 && !(inserted == 1 && deleted == 1)) {
let from = changes[startI].fromB, to = changes[i].toB
if (from < end && isLetter(text.charCodeAt(from - start)))
while (from > start && isLetter(text.charCodeAt(from - 1 - start))) from--
if (to > start && isLetter(text.charCodeAt(to - 1 - start)))
while (to < end && isLetter(text.charCodeAt(to - start))) to++
let joined = fillChange(changes.slice(startI, i + 1), from, to)
let last = target.length ? target[target.length - 1] : null
if (last && last.toA == joined.fromA)
target[target.length - 1] = new Change(last.fromA, joined.toA, last.fromB, joined.toB,
last.deleted.concat(joined.deleted), last.inserted.concat(joined.inserted))
else
target.push(joined)
} else {
for (let j = startI; j <= i; j++) target.push(changes[j])
}
}
return changes
}
function combine<T>(a: T, b: T): T { return a === b ? a : null as any }
function fillChange(changes: readonly Change[], fromB: number, toB: number) {
let fromA = changes[0].fromA - (changes[0].fromB - fromB)
let last = changes[changes.length - 1]
let toA = last.toA + (toB - last.toB)
let deleted = Span.none, inserted = Span.none
let delData = (changes[0].deleted.length ? changes[0].deleted : changes[0].inserted)[0].data
let insData = (changes[0].inserted.length ? changes[0].inserted : changes[0].deleted)[0].data
for (let posA = fromA, posB = fromB, i = 0;; i++) {
let next = i == changes.length ? null : changes[i]
let endA = next ? next.fromA : toA, endB = next ? next.fromB : toB
if (endA > posA) deleted = Span.join(deleted, [new Span(endA - posA, delData)], combine)
if (endB > posB) inserted = Span.join(inserted, [new Span(endB - posB, insData)], combine)
if (!next) break
deleted = Span.join(deleted, next.deleted, combine)
inserted = Span.join(inserted, next.inserted, combine)
if (deleted.length) delData = deleted[deleted.length - 1].data
if (inserted.length) insData = inserted[inserted.length - 1].data
posA = next.toA; posB = next.toB
}
return new Change(fromA, toA, fromB, toB, deleted, inserted)
}
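The word-boundary expansion in the branch above (growing `from`/`to` until they hit non-letter characters) can be illustrated on a plain string. This is a standalone sketch with a deliberately simplified ASCII-only `isLetter`, not the library's internal helper:

```typescript
// Simplified letter test: ASCII digits and letters only (the real check
// in the library covers a much wider character range).
function isLetter(code: number): boolean {
  return (code >= 48 && code <= 57) || (code >= 65 && code <= 90) ||
    (code >= 97 && code <= 122)
}

// Widen a changed range [from, to) so that it covers whole words,
// mirroring the two while-loops in the function above.
function expandToWord(text: string, from: number, to: number): [number, number] {
  if (from < text.length && isLetter(text.charCodeAt(from)))
    while (from > 0 && isLetter(text.charCodeAt(from - 1))) from--
  if (to > 0 && isLetter(text.charCodeAt(to - 1)))
    while (to < text.length && isLetter(text.charCodeAt(to))) to++
  return [from, to]
}
```

For example, `expandToWord("one two three", 5, 6)` widens a one-character change inside "two" to `[4, 7]`, the whole word, while a range over non-letter characters such as `"---"` is left untouched.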

import ist from "ist"
import {schema, doc, p} from "prosemirror-test-builder"
import {Transform} from "prosemirror-transform"
import {Node} from "prosemirror-model"
import {ChangeSet} from "prosemirror-changeset"
function mk(doc: Node, change: (tr: Transform) => Transform): {doc0: Node, tr: Transform, data: any[], set0: ChangeSet, set: ChangeSet} {
let tr = change(new Transform(doc))
let data = new Array(tr.steps.length).fill("a")
let set0 = ChangeSet.create(doc)
return {doc0: doc, tr, data, set0,
set: set0.addSteps(tr.doc, tr.mapping.maps, data)}
}
function same(a: any, b: any) {
ist(JSON.stringify(a), JSON.stringify(b))
}
describe("ChangeSet.changedRange", () => {
it("returns null for identical sets", () => {
let {set, doc0, tr, data} = mk(doc(p("foo")), tr => tr
.replaceWith(2, 3, schema.text("aaaa"))
.replaceWith(1, 1, schema.text("xx"))
.delete(5, 7))
ist(set.changedRange(set), null)
ist(set.changedRange(ChangeSet.create(doc0).addSteps(tr.doc, tr.mapping.maps, data)), null)
})
it("returns only the changed range in simple cases", () => {
let {set0, set, tr} = mk(doc(p("abcd")), tr => tr.replaceWith(2, 4, schema.text("u")))
same(set0.changedRange(set, tr.mapping.maps), {from: 2, to: 3})
})
it("expands to cover updated spans", () => {
let {doc0, set0, set, tr} = mk(doc(p("abcd")), tr => tr
.replaceWith(2, 2, schema.text("c"))
.delete(3, 5))
let set1 = ChangeSet.create(doc0).addSteps(tr.docs[1], [tr.mapping.maps[0]], ["a"])
same(set0.changedRange(set1, [tr.mapping.maps[0]]), {from: 2, to: 3})
same(set1.changedRange(set, [tr.mapping.maps[1]]), {from: 2, to: 3})
})
it("detects changes in deletions", () => {
let {set} = mk(doc(p("abc")), tr => tr.delete(2, 3))
same(set.changedRange(set.map(() => "b")), {from: 2, to: 2})
})
})

node_modules/prosemirror-changeset/test/test-changes.ts
import ist from "ist"
import {schema, doc, p, blockquote, h1} from "prosemirror-test-builder"
import {Transform} from "prosemirror-transform"
import {Node} from "prosemirror-model"
import {ChangeSet} from "prosemirror-changeset"
describe("ChangeSet", () => {
it("finds a single insertion",
find(doc(p("hello")), tr => tr.insert(3, t("XY")), [[3, 3, 3, 5]]))
it("finds a single deletion",
find(doc(p("hello")), tr => tr.delete(3, 5), [[3, 5, 3, 3]]))
it("identifies a replacement",
find(doc(p("hello")), tr => tr.replaceWith(3, 5, t("juj")),
[[3, 5, 3, 6]]))
it("merges adjacent canceling edits",
find(doc(p("hello")),
tr => tr.delete(3, 5).insert(3, t("ll")),
[]))
it("doesn't crash when cancelling edits are followed by others",
find(doc(p("hello")),
tr => tr.delete(2, 3).insert(2, t("e")).delete(5, 6),
[[5, 6, 5, 5]]))
it("stops handling an inserted span after collapsing it",
find(doc(p("abcba")), tr => tr.insert(2, t("b")).insert(6, t("b")).delete(3, 6),
[[3, 4, 3, 3]]))
it("partially merges insert at start",
find(doc(p("helLo")), tr => tr.delete(3, 5).insert(3, t("l")),
[[4, 5, 4, 4]]))
it("partially merges insert at end",
find(doc(p("helLo")), tr => tr.delete(3, 5).insert(3, t("L")),
[[3, 4, 3, 3]]))
it("partially merges delete at start",
find(doc(p("abc")), tr => tr.insert(3, t("xyz")).delete(3, 4),
[[3, 3, 3, 5]]))
it("partially merges delete at end",
find(doc(p("abc")), tr => tr.insert(3, t("xyz")).delete(5, 6),
[[3, 3, 3, 5]]))
it("finds multiple insertions",
find(doc(p("abc")), tr => tr.insert(1, t("x")).insert(5, t("y")),
[[1, 1, 1, 2], [4, 4, 5, 6]]))
it("finds multiple deletions",
find(doc(p("xyz")), tr => tr.delete(1, 2).delete(2, 3),
[[1, 2, 1, 1], [3, 4, 2, 2]]))
it("identifies a deletion between insertions",
find(doc(p("zyz")), tr => tr.insert(2, t("A")).insert(4, t("B")).delete(3, 4),
[[2, 3, 2, 4]]))
it("can add a deletion in a new addStep call", find(doc(p("hello")), [
tr => tr.delete(1, 2),
tr => tr.delete(2, 3)
], [[1, 2, 1, 1], [3, 4, 2, 2]]))
it("merges delete/insert from different addStep calls", find(doc(p("hello")), [
tr => tr.delete(3, 5),
tr => tr.insert(3, t("ll"))
], []))
  it("reverts a deletion by inserting the character again", find(doc(p("bar")), [
tr => tr.delete(2, 3), // br
tr => tr.insert(2, t("x")), // bxr
tr => tr.insert(2, t("a")) // baxr
], [[3, 3, 3, 4]]))
  it("inserts a character before a changed character", find(doc(p("bar")), [
tr => tr.delete(2, 3), // br
tr => tr.insert(2, t("x")), // bxr
tr => tr.insert(2, t("x")) // bxxr
], [[2, 3, 2, 4]]))
it("partially merges delete/insert from different addStep calls", find(doc(p("heljo")), [
tr => tr.delete(3, 5),
tr => tr.insert(3, t("ll"))
], [[4, 5, 4, 5]]))
it("merges insert/delete from different addStep calls", find(doc(p("ok")), [
tr => tr.insert(2, t("--")),
tr => tr.delete(2, 4)
], []))
it("partially merges insert/delete from different addStep calls", find(doc(p("ok")), [
tr => tr.insert(2, t("--")),
tr => tr.delete(2, 3)
], [[2, 2, 2, 3]]))
it("maps deletions forward", find(doc(p("foobar")), [
tr => tr.delete(5, 6),
tr => tr.insert(1, t("OKAY"))
], [[1, 1, 1, 5], [5, 6, 9, 9]]))
it("can incrementally undo then redo", find(doc(p("bar")), [
tr => tr.delete(2, 3),
tr => tr.insert(2, t("a")),
tr => tr.delete(2, 3)
], [[2, 3, 2, 2]]))
it("can map through complicated changesets", find(doc(p("12345678901234")), [
tr => tr.delete(9, 12).insert(6, t("xyz")).replaceWith(2, 3, t("uv")),
tr => tr.delete(14, 15).insert(13, t("90")).delete(8, 9)
], [[2, 3, 2, 4], [6, 6, 7, 9], [11, 12, 14, 14], [13, 14, 15, 15]]))
it("computes a proper diff of the changes",
find(doc(p("abcd"), p("efgh")), tr => tr.delete(2, 10).insert(2, t("cdef")),
[[2, 3, 2, 2], [5, 7, 4, 4], [9, 10, 6, 6]]))
it("handles re-adding content step by step", find(doc(p("one two three")), [
tr => tr.delete(1, 14),
tr => tr.insert(1, t("two")),
tr => tr.insert(4, t(" ")),
tr => tr.insert(5, t("three"))
], [[1, 5, 1, 1]]))
it("doesn't get confused by split deletions", find(doc(blockquote(h1("one"), p("two four"))), [
tr => tr.delete(7, 11),
tr => tr.replaceWith(0, 13, blockquote(h1("one"), p("four")))
], [[7, 11, 7, 7, [[4, 0]], []]], true))
it("doesn't get confused by multiply split deletions", find(doc(blockquote(h1("one"), p("two three"))), [
tr => tr.delete(14, 16),
tr => tr.delete(7, 11),
tr => tr.delete(3, 5),
tr => tr.replaceWith(0, 10, blockquote(h1("o"), p("thr")))
], [[3, 5, 3, 3, [[2, 2]], []], [8, 12, 6, 6, [[3, 1], [1, 3]], []],
[14, 16, 8, 8, [[2, 0]], []]], true))
it("won't lose the order of overlapping changes", find(doc(p("12345")), [
tr => tr.delete(4, 5),
tr => tr.replaceWith(2, 2, t("a")),
tr => tr.delete(1, 6),
tr => tr.replaceWith(1, 1, t("1a235"))
], [[2, 2, 2, 3, [], [[1, 1]]], [4, 5, 5, 5, [[1, 0]], []]], [0, 0, 1, 1]))
it("properly maps deleted positions", find(doc(p("jTKqvPrzApX")), [
tr => tr.delete(8, 11),
tr => tr.replaceWith(1, 1, t("MPu")),
tr => tr.delete(2, 12),
tr => tr.replaceWith(2, 2, t("PujTKqvPrX"))
], [[1, 1, 1, 4, [], [[3, 2]]], [8, 11, 11, 11, [[3, 1]], []]], [1, 2, 2, 2]))
it("fuzz issue 1", find(doc(p("hzwiKqBPzn")), [
tr => tr.delete(3, 7),
tr => tr.replaceWith(5, 5, t("LH")),
tr => tr.replaceWith(6, 6, t("uE")),
tr => tr.delete(1, 6),
tr => tr.delete(3, 6)
], [[1, 11, 1, 3, [[2, 1], [4, 0], [2, 1], [2, 0]], [[2, 0]]]], [0, 1, 0, 1, 0]))
it("fuzz issue 2", find(doc(p("eAMISWgauf")), [
tr => tr.delete(5, 10),
tr => tr.replaceWith(5, 5, t("KkM")),
tr => tr.replaceWith(3, 3, t("UDO")),
tr => tr.delete(1, 12),
tr => tr.replaceWith(1, 1, t("eAUDOMIKkMf")),
tr => tr.delete(5, 8),
tr => tr.replaceWith(3, 3, t("qX"))
], [[3, 10, 3, 10, [[2, 0], [5, 2]], [[7, 0]]]], [2, 0, 0, 0, 0, 0, 0]))
it("fuzz issue 3", find(doc(p("hfxjahnOuH")), [
tr => tr.delete(1, 5),
tr => tr.replaceWith(3, 3, t("X")),
tr => tr.delete(1, 8),
tr => tr.replaceWith(1, 1, t("ahXnOuH")),
tr => tr.delete(2, 4),
tr => tr.replaceWith(2, 2, t("tn")),
tr => tr.delete(5, 7),
tr => tr.delete(1, 6),
tr => tr.replaceWith(1, 1, t("atnnH")),
tr => tr.delete(2, 6)
], [[1, 11, 1, 2, [[4, 1], [1, 0], [1, 1], [1, 0], [2, 1], [1, 0]], [[1, 0]]]], [1, 0, 1, 1, 1, 1, 1, 0, 0, 0]))
it("correctly handles steps with multiple map entries", find(doc(p()), [
tr => tr.replaceWith(1, 1, t("ab")),
tr => tr.wrap(tr.doc.resolve(1).blockRange()!, [{type: schema.nodes.blockquote}])
], [[0, 0, 0, 1], [1, 1, 2, 4], [2, 2, 5, 6]]))
})
function find(doc: Node, build: ((tr: Transform) => void) | ((tr: Transform) => void)[],
changes: any[], sep?: number[] | boolean) {
return () => {
let set = ChangeSet.create(doc), curDoc = doc
if (!Array.isArray(build)) build = [build]
build.forEach((build, i) => {
let tr = new Transform(curDoc)
build(tr)
set = set.addSteps(tr.doc, tr.mapping.maps, !sep ? 0 : Array.isArray(sep) ? sep[i] : i)
curDoc = tr.doc
})
let owner = sep && changes.length && changes[0].length > 4
ist(JSON.stringify(set.changes.map(ch => {
let range: any[] = [ch.fromA, ch.toA, ch.fromB, ch.toB]
if (owner) range.push(ch.deleted.map(d => [d.length, d.data]),
ch.inserted.map(d => [d.length, d.data]))
return range
})), JSON.stringify(changes))
}
}
function t(str: string) { return schema.text(str) }

node_modules/prosemirror-changeset/test/test-diff.ts
import ist from "ist"
import {doc, p, em, strong, h1, h2} from "prosemirror-test-builder"
import {Node} from "prosemirror-model"
import {Span, Change, ChangeSet} from "prosemirror-changeset"
const {computeDiff} = ChangeSet
describe("computeDiff", () => {
function test(doc1: Node, doc2: Node, ...ranges: number[][]) {
let diff = computeDiff(doc1.content, doc2.content,
new Change(0, doc1.content.size, 0, doc2.content.size,
[new Span(doc1.content.size, 0)],
[new Span(doc2.content.size, 0)]))
ist(JSON.stringify(diff.map(r => [r.fromA, r.toA, r.fromB, r.toB])), JSON.stringify(ranges))
}
it("returns an empty diff for identical documents", () =>
test(doc(p("foo"), p("bar")), doc(p("foo"), p("bar"))))
it("finds single-letter changes", () =>
test(doc(p("foo"), p("bar")), doc(p("foa"), p("bar")),
[3, 4, 3, 4]))
it("finds simple structure changes", () =>
test(doc(p("foo"), p("bar")), doc(p("foobar")),
[4, 6, 4, 4]))
it("finds multiple changes", () =>
test(doc(p("foo"), p("---bar")), doc(p("fgo"), p("---bur")),
[2, 4, 2, 4], [10, 11, 10, 11]))
it("ignores single-letter unchanged parts", () =>
test(doc(p("abcdef")), doc(p("axydzf")), [2, 6, 2, 6]))
it("ignores matching substrings in longer diffs", () =>
test(doc(p("One two three")), doc(p("One"), p("And another long paragraph that has wo and ee in it")),
[4, 14, 4, 57]))
it("finds deletions", () =>
test(doc(p("abc"), p("def")), doc(p("ac"), p("d")),
[2, 3, 2, 2], [7, 9, 6, 6]))
it("ignores marks", () =>
test(doc(p("abc")), doc(p(em("a"), strong("bc")))))
it("ignores marks in diffing", () =>
test(doc(p("abcdefghi")), doc(p(em("x"), strong("bc"), "defgh", em("y"))),
[1, 2, 1, 2], [9, 10, 9, 10]))
it("ignores attributes", () =>
test(doc(h1("x")), doc(h2("x"))))
it("finds huge deletions", () => {
let xs = "x".repeat(200), bs = "b".repeat(20)
test(doc(p("a" + bs + "c")), doc(p("a" + xs + bs + xs + "c")),
[2, 2, 2, 202], [22, 22, 222, 422])
})
it("finds huge insertions", () => {
let xs = "x".repeat(200), bs = "b".repeat(20)
test(doc(p("a" + xs + bs + xs + "c")), doc(p("a" + bs + "c")),
[2, 202, 2, 2], [222, 422, 22, 22])
})
it("can handle ambiguous diffs", () =>
test(doc(p("abcbcd")), doc(p("abcd")), [4, 6, 4, 4]))
it("sees the difference between different closing tokens", () =>
test(doc(p("a")), doc(h1("oo")), [0, 3, 0, 4]))
})
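The simplest phase of any such diff, and the one behind cases like "ignores single-letter unchanged parts", is trimming the shared prefix and suffix before searching for inner matches. Here is a hypothetical standalone sketch of just that trimming step on plain strings (the real `computeDiff` works on encoded node and character tokens and does considerably more):

```typescript
// Trim the common prefix and suffix of two strings, returning the remaining
// changed ranges as [fromA, toA, fromB, toB]. Identical inputs yield an
// empty range (fromA == toA and fromB == toB).
function trimDiff(a: string, b: string): [number, number, number, number] {
  let start = 0
  while (start < a.length && start < b.length && a[start] === b[start]) start++
  let endA = a.length, endB = b.length
  while (endA > start && endB > start && a[endA - 1] === b[endB - 1]) { endA--; endB-- }
  return [start, endA, start, endB]
}
```

For instance, `trimDiff("abcdef", "axydzf")` returns `[1, 5, 1, 5]`: the same shape as the `[2, 6, 2, 6]` range in the test above, offset by one because ProseMirror document positions count the paragraph's opening token.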

node_modules/prosemirror-changeset/test/test-merge.ts
import ist from "ist"
import {Change, Span} from "prosemirror-changeset"
describe("mergeChanges", () => {
it("can merge simple insertions", () => test(
[[1, 1, 1, 2]], [[1, 1, 1, 2]], [[1, 1, 1, 3]]
))
it("can merge simple deletions", () => test(
[[1, 2, 1, 1]], [[1, 2, 1, 1]], [[1, 3, 1, 1]]
))
it("can merge insertion before deletion", () => test(
[[2, 3, 2, 2]], [[1, 1, 1, 2]], [[1, 1, 1, 2], [2, 3, 3, 3]]
))
it("can merge insertion after deletion", () => test(
[[2, 3, 2, 2]], [[2, 2, 2, 3]], [[2, 3, 2, 3]]
))
it("can merge deletion before insertion", () => test(
[[2, 2, 2, 3]], [[1, 2, 1, 1]], [[1, 2, 1, 2]]
))
it("can merge deletion after insertion", () => test(
[[2, 2, 2, 3]], [[3, 4, 3, 3]], [[2, 3, 2, 3]]
))
it("can merge deletion of insertion", () => test(
[[2, 2, 2, 3]], [[2, 3, 2, 2]], []
))
it("can merge insertion after replace", () => test(
[[2, 3, 2, 3]], [[3, 3, 3, 4]], [[2, 3, 2, 4]]
))
it("can merge insertion before replace", () => test(
[[2, 3, 2, 3]], [[2, 2, 2, 3]], [[2, 3, 2, 4]]
))
it("can merge replace after insert", () => test(
[[2, 2, 2, 3]], [[2, 3, 2, 3]], [[2, 2, 2, 3]]
))
})
function range(array: number[], author = 0) {
let [fromA, toA] = array
let [fromB, toB] = array.length > 2 ? array.slice(2) : array
return new Change(fromA, toA, fromB, toB, [new Span(toA - fromA, author)], [new Span(toB - fromB, author)])
}
function test(changeA: number[][], changeB: number[][], expected: number[][]) {
const result = Change.merge(changeA.map(range), changeB.map(range), a => a)
.map(r => [r.fromA, r.toA, r.fromB, r.toB])
ist(JSON.stringify(result), JSON.stringify(expected))
}
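The `[fromA, toA, fromB, toB]` tuples used throughout these tests describe a replacement of the old-document range `fromA..toA` with the new-document range `fromB..toB`. As an illustration of that convention, here is how a position in the old document could be mapped through such a list of changes (a hypothetical helper for this sketch, not part of the package):

```typescript
type ChangeTuple = [number, number, number, number] // [fromA, toA, fromB, toB]

// Map a position in the old document to the new document. Positions inside
// a replaced range collapse to the start of the replacement; positions after
// a change shift by that change's size difference. Assumes the tuples are
// sorted and non-overlapping, as in the tests above.
function mapPos(changes: ChangeTuple[], pos: number): number {
  let delta = 0
  for (const [fromA, toA, fromB, toB] of changes) {
    if (pos <= fromA) break                 // before this change: done
    if (pos < toA) return fromB             // inside the replaced range
    delta += (toB - fromB) - (toA - fromA)  // accumulate size difference
  }
  return pos + delta
}
```

With the changes from the "maps deletions forward" case, `[[1, 1, 1, 5], [5, 6, 9, 9]]`, old position 3 maps to 7: it is shifted past the four inserted characters but unaffected by the later deletion.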

import ist from "ist"
import {doc, p, img} from "prosemirror-test-builder"
import {Node} from "prosemirror-model"
import {simplifyChanges, Change, Span} from "prosemirror-changeset"
describe("simplifyChanges", () => {
it("doesn't change insertion-only changes", () => test(
[[1, 1, 1, 2], [2, 2, 3, 4]], doc(p("hello")), [[1, 1, 1, 2], [2, 2, 3, 4]]))
it("doesn't change deletion-only changes", () => test(
[[1, 2, 1, 1], [3, 4, 2, 2]], doc(p("hello")), [[1, 2, 1, 1], [3, 4, 2, 2]]))
it("doesn't change single-letter-replacements", () => test(
[[1, 2, 1, 2]], doc(p("hello")), [[1, 2, 1, 2]]))
it("does expand multiple-letter replacements", () => test(
[[2, 4, 2, 4]], doc(p("hello")), [[1, 6, 1, 6]]))
it("does combine changes within the same word", () => test(
[[1, 3, 1, 1], [5, 5, 3, 4]], doc(p("hello")), [[1, 7, 1, 6]]))
it("expands changes to cover full words", () => test(
[[7, 10]], doc(p("one two three four")), [[5, 14]]))
it("doesn't expand across non-word text", () => test(
[[7, 10]], doc(p("one two ----- four")), [[5, 10]]))
it("treats leaf nodes as non-words", () => test(
[[2, 3], [6, 7]], doc(p("one", img(), "two")), [[2, 3], [6, 7]]))
it("treats node boundaries as non-words", () => test(
[[2, 3], [7, 8]], doc(p("one"), p("two")), [[2, 3], [7, 8]]))
it("can merge stretches of changes", () => test(
[[2, 3], [4, 6], [8, 10], [15, 16]], doc(p("foo bar baz bug ugh")), [[1, 12], [15, 16]]))
it("handles realistic word updates", () => test(
[[8, 8, 8, 11], [10, 15, 13, 17]], doc(p("chonic condition")), [[8, 15, 8, 17]]))
it("works when after significant content", () => test(
[[63, 80, 63, 83]], doc(p("one long paragraph -----"), p("two long paragraphs ------"), p("a vote against the government")),
[[62, 81, 62, 84]]))
it("joins changes that grow together when simplifying", () => test(
[[1, 5, 1, 5], [7, 13, 7, 9], [20, 21, 16, 16]], doc(p('and his co-star')),
[[1, 13, 1, 9], [20, 21, 16, 16]]))
it("properly fills in metadata", () => {
let simple = simplifyChanges([range([2, 3], 0), range([4, 6], 1), range([8, 9, 8, 8], 2)],
doc(p("1234567890")))
ist(simple.length, 1)
ist(JSON.stringify(simple[0].deleted.map(s => [s.length, s.data])),
JSON.stringify([[3, 0], [4, 1], [4, 2]]))
ist(JSON.stringify(simple[0].inserted.map(s => [s.length, s.data])),
JSON.stringify([[3, 0], [4, 1], [3, 2]]))
})
})
function range(array: number[], author = 0) {
let [fromA, toA] = array
let [fromB, toB] = array.length > 2 ? array.slice(2) : array
return new Change(fromA, toA, fromB, toB, [new Span(toA - fromA, author)], [new Span(toB - fromB, author)])
}
function test(changes: number[][], doc: Node, result: number[][]) {
let ranges = changes.map(range)
ist(JSON.stringify(simplifyChanges(ranges, doc).map((r, i) => {
if (result[i] && result[i].length > 2) return [r.fromA, r.toA, r.fromB, r.toB]
else return [r.fromB, r.toB]
})), JSON.stringify(result))
}