Docs
Clauses
MATCH
Finds patterns in the graph. Variables are bound to matched nodes and relationships.
MATCH (n:Person {name: 'Alice'}) RETURN n
MATCH (a:Person)-[:FRIEND]->(b:Person) RETURN a, b
OPTIONAL MATCH
Like MATCH, but returns null for unmatched variables instead of eliminating the row.
MATCH (a:Person) OPTIONAL MATCH (a)-[:FRIEND]->(b) RETURN a.name, b.name
Gotcha: A WHERE clause directly after OPTIONAL MATCH is part of the optional pattern — it filters within the optional match, but non-matching rows still come through with null values for the optional variables (instead of being dropped).
// This returns ALL :Person rows, with null for `b` when no FRIEND has age > 30
MATCH (a:Person)
OPTIONAL MATCH (a)-[:FRIEND]->(b) WHERE b.age > 30
RETURN a.name, b.name
To filter rows OUT entirely (drop rows where the optional pattern didn't match the WHERE), use WITH to bridge between the OPTIONAL MATCH and the WHERE:
// This drops :Person rows that don't have a FRIEND with age > 30
MATCH (a:Person)
OPTIONAL MATCH (a)-[:FRIEND]->(b)
WITH a, b WHERE b.age > 30
RETURN a.name, b.name
WHERE
Filters results. Used after MATCH or WITH.
MATCH (n:Person) WHERE n.age > 30 RETURN n
MATCH (n:Person) WHERE n.name STARTS WITH 'A' AND n.age >= 25 RETURN n
RETURN
Projects output columns. Supports aliases, DISTINCT, and * for all variables.
MATCH (n:Person) RETURN n.name AS personName
MATCH (n:Person) RETURN DISTINCT n.city
MATCH (n:Person) RETURN *
WITH
Pipes intermediate results between query parts. Same syntax as RETURN.
MATCH (n:Person) WITH n.name AS name, n.age AS age WHERE age > 30 RETURN name
ORDER BY / SKIP / LIMIT
Sorting and pagination. SKIP and LIMIT accept integers or $parameter refs.
MATCH (n:Person) RETURN n.name ORDER BY n.age DESC
MATCH (n:Person) RETURN n SKIP 5 LIMIT 10
CREATE
Creates nodes and relationships.
CREATE (n:Person {name: 'Alice', age: 30})
CREATE (a)-[:FRIEND {since: 2020}]->(b)
MERGE
Match-or-create. Finds the pattern if it exists, creates it if not.
MERGE (p:Person {name: 'Alice'})
ON CREATE SET p.created = timestamp()
ON MATCH SET p.lastSeen = timestamp()
SET
Updates properties on nodes or relationships.
MATCH (n:Person {name: 'Alice'}) SET n.age = 31
MATCH (n:Person {name: 'Alice'}) SET n += {city: 'NYC'}
DELETE / DETACH DELETE
Removes nodes and relationships. DETACH DELETE also removes connected edges.
MATCH (n:Person {name: 'Alice'}) DELETE n
MATCH (n:Person {name: 'Alice'}) DETACH DELETE n
REMOVE
Removes properties from nodes.
MATCH (n:Person) REMOVE n.age
UNWIND
Expands a list into individual rows.
UNWIND [1, 2, 3] AS x RETURN x
UNWIND [{name: 'Alice'}, {name: 'Bob'}] AS props
CREATE (p:Person) SET p.name = props.name
FOREACH
Iterates over a list and executes mutation clauses per element.
MATCH (a:Person {name: 'Alice'})
FOREACH (name IN ['Bob', 'Carol'] |
  CREATE (a)-[:FRIEND]->(:Person {name: name})
)
CALL { subquery }
Runs a subquery for each incoming row.
MATCH (p:Person)
CALL {
  WITH p
  MATCH (p)-[:FRIEND]->(f)
  RETURN count(f) AS friendCount
}
RETURN p.name, friendCount
UNION / UNION ALL
Combines results from multiple queries. UNION deduplicates; UNION ALL keeps all.
MATCH (a:Person) RETURN a.name AS name
UNION
MATCH (b:Company) RETURN b.name AS name
CREATE INDEX / DROP INDEX
CREATE INDEX ON :Person(name)
DROP INDEX ON :Person(name)
Property index over current values. On bitemporal graphs, CREATE INDEX also enables a temporal property index that supports AT / AT VALID / AT RECORDED predicates soundly — the naive "use current index, filter by time" shortcut silently misses nodes whose past value matched but whose current differs. The temporal index is built lazily per (label, prop), persisted on compaction, and shows up in EXPLAIN as TemporalIndexSeek.
SHOW Commands
SHOW INDEXES             -- list all property indexes
SHOW LABELS              -- list all node labels
SHOW RELATIONSHIP TYPES  -- list all edge types
SHOW CONSTRAINTS         -- list unique constraints
EXPLAIN
EXPLAIN MATCH (n:Person) RETURN n -- returns the execution plan without running the query
Built-in Procedures
CALL db.schema()                        -- graph schema: labels, types
CALL db.schema.visualization()          -- renderable schema: one node per label, one edge per type
CALL db.indexStats()                    -- per-index entry counts
CALL db.rebuildIndex('Person', 'name')  -- rebuild an index
CALL db.procedures()                    -- list all available procedures

-- Full-text search with relevance scoring
CALL db.index.fulltext.queryNodes('Article', 'title', 'graph database')
YIELD node, score
RETURN node.title, score

-- Vector similarity (cosine) search, top-K results
CALL db.index.vector.queryNodes('Doc', 'embedding', [0.1, 0.9], 5)
YIELD node, score
RETURN node.title, score

-- Temporal procedures

-- Audit log: all mutations in a time range
CALL db.changes('2025-01-01T00:00:00Z', '2025-03-01T00:00:00Z')
YIELD timestamp, operation, entityType, id, labels, properties

-- Filter changes to a specific label
CALL db.changes('2025-01-01T00:00:00Z', '2025-03-01T00:00:00Z', 'Person')
YIELD timestamp, operation, id, properties

-- Snapshot diff: one row per change (added/removed/changed) between two points
CALL db.diff('2025-01-01T00:00:00Z', '2025-03-01T00:00:00Z')
YIELD entityId, entityType, changeType, label, property, before, after

-- Full version history of an entity
MATCH (p:Person {name: 'Alice'})
CALL db.history(id(p))
YIELD version, operation, validFrom, validTo, properties

-- Value intervals for a specific property
MATCH (p:Policy {id: $id})
CALL db.propertyHistory(id(p), 'premium')
YIELD value, validFrom, validTo
Patterns
Node Patterns
()                           -- anonymous node
(n)                          -- bound to variable n
(n:Person)                   -- with label
(n:Person:Employee)          -- multiple labels
(n:Person {name: 'Alice'})   -- with property filter
Relationship Patterns
-[r:FRIEND]->                -- outgoing, typed
<-[r:FRIEND]-                -- incoming, typed
-[r:FRIEND]-                 -- undirected
-->                          -- outgoing shorthand
<--                          -- incoming shorthand
--                           -- undirected shorthand
-[:FRIEND|COWORKER]->        -- type alternation
-[r:FRIEND {since: 2020}]->  -- with property filter
Variable-Length Paths
-[*1..3]->         -- 1 to 3 hops
-[*2]->            -- exactly 2 hops
-[*]->             -- 1 to 10 hops (default)
-[:FRIEND*1..5]->  -- typed variable-length
Maximum depth is capped at 10.
Named Paths
MATCH p = (a:Person)-[:FRIEND*1..3]->(b:Person)
RETURN p, nodes(p), relationships(p)
The path variable is an array that alternates nodes and edges: [node, edge, node, edge, ..., node].
shortestPath / allShortestPaths
-- Find the shortest path between two nodes
MATCH p = shortestPath((a:Person {name:'Alice'})-[*]-(b:Person {name:'Charlie'}))
RETURN p, length(p)

-- Find all shortest paths (all paths at minimum depth)
MATCH p = allShortestPaths((a:Person)-[*]-(b:Person))
RETURN p
Uses BFS to find the shortest path(s). Supports typed relationships and property filters. Maximum depth defaults to 15.
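The BFS semantics above can be sketched in a few lines of Python. This is illustrative only (a hypothetical adjacency-dict representation, not the engine's internals): it returns the first path found at minimum depth and stops expanding once the documented depth cap is reached.

```python
from collections import deque

def shortest_path(adj, start, goal, max_depth=15):
    """BFS shortest path over an adjacency dict; returns a node list or None.

    Mirrors the documented behavior: first path found at minimum depth,
    with the search capped at max_depth hops (default 15)."""
    if start == goal:
        return [start]
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if len(path) - 1 >= max_depth:
            continue  # depth cap reached; stop expanding this path
        for nxt in adj.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

friends = {'Alice': ['Bob'], 'Bob': ['Alice', 'Charlie'], 'Charlie': ['Bob']}
print(shortest_path(friends, 'Alice', 'Charlie'))  # ['Alice', 'Bob', 'Charlie']
```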
Expressions
Literals
42               -- integer
3.14             -- float
'hello'          -- string
true / false     -- boolean
null             -- null
[1, 2, 3]        -- list
{name: 'Alice'}  -- map
CASE Expression
Generic form:
CASE WHEN n.age > 30 THEN 'senior' WHEN n.age > 20 THEN 'junior' ELSE 'unknown' END
Simple form:
CASE n.status WHEN 'active' THEN 1 WHEN 'inactive' THEN 0 ELSE -1 END
List Indexing & Slicing
list[0]     -- first element
list[-1]    -- last element
list[1..3]  -- slice (exclusive end)
Out-of-bounds access returns null.
List Comprehension
[x IN [1,2,3,4,5] WHERE x > 2 | x * 10]  -- [30, 40, 50]
[x IN range(1, 5) | x * x]               -- [1, 4, 9, 16, 25]
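The Cypher form maps one-to-one onto a Python list comprehension, which can help when translating queries: `WHERE` becomes `if`, and the `| expr` projection becomes the leading expression. Note that Cypher's `range()` is end-inclusive, unlike Python's.

```python
# Cypher: [x IN [1,2,3,4,5] WHERE x > 2 | x * 10]
result = [x * 10 for x in [1, 2, 3, 4, 5] if x > 2]
print(result)  # [30, 40, 50]

# Cypher: [x IN range(1, 5) | x * x]  -- Cypher's range() includes the end value
squares = [x * x for x in range(1, 5 + 1)]
print(squares)  # [1, 4, 9, 16, 25]
```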
Pattern Comprehension
-- Inline pattern matching that returns a list
MATCH (a:Person)
RETURN [(a)-[:FRIEND]->(f) | f.name] AS friends

-- With WHERE filter
MATCH (a:Person)
RETURN [(a)-[:FRIEND]->(f) WHERE f.age > 25 | f.name] AS olderFriends
Evaluates a pattern inline per row and returns a list of projected values.
Map Projection
-- Select specific properties from a node
MATCH (n:Person)
RETURN n { .name, .age } AS person

-- Mix shorthand and computed properties
MATCH (n:Person)
RETURN n { .name, upperName: toUpper(n.name) } AS person
Creates a map from selected properties. Shorthand .prop reads from the object; explicit key: expr evaluates an expression.
EXISTS Subquery
MATCH (n:Person) WHERE exists { MATCH (n)-[:FRIEND]->() } RETURN n
COUNT Subquery
MATCH (n:Person) RETURN n.name, count { MATCH (n)-[:FRIEND]->() } AS friendCount
Parameters
Query parameters use $name syntax and are resolved from the execution context.
MATCH (n:Person {name: $name}) RETURN n
MATCH (n:Person) RETURN n LIMIT $limit
Operators
Comparison
| Operator | Description |
|---|---|
| = | Equal |
| <> != | Not equal |
| < > <= >= | Ordering |
Boolean
| Operator | Description |
|---|---|
| AND | Logical and |
| OR | Logical or |
| XOR | Exclusive or |
| NOT | Negation |
Arithmetic
| Operator | Description |
|---|---|
| + - * / % | Standard math |
| ^ | Power |
| + | String concatenation |
String
| Operator | Description |
|---|---|
| STARTS WITH | Prefix match |
| ENDS WITH | Suffix match |
| CONTAINS | Substring match |
| =~ | Regex (full-string, JS syntax) |
Null & List
| Operator | Description |
|---|---|
| IS NULL | Null check |
| IS NOT NULL | Non-null check |
| IN | List membership |
Aggregate Functions
Aggregate functions group rows and reduce them. All support DISTINCT: count(DISTINCT x).
| Function | Description |
|---|---|
| count(expr) / count(*) | Count values or rows |
| collect(expr) | Collect into a list |
| sum(expr) | Sum numeric values |
| avg(expr) | Average |
| min(expr) / max(expr) | Min / max |
| stdev(expr) / stdevp(expr) | Sample / population std dev |
| percentileCont(expr, pct) | Continuous percentile |
| percentileDisc(expr, pct) | Discrete percentile |
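The difference between the two percentile functions is easiest to see numerically. A Python sketch of one common pair of definitions (the engine's exact interpolation and tie-breaking rules are assumptions here): percentileCont interpolates between the two closest ranks, while percentileDisc always returns an actual value from the data.

```python
import math

def percentile_cont(values, pct):
    """Continuous percentile: linear interpolation between closest ranks."""
    xs = sorted(values)
    idx = pct * (len(xs) - 1)
    lo, hi = math.floor(idx), math.ceil(idx)
    return xs[lo] + (xs[hi] - xs[lo]) * (idx - lo)

def percentile_disc(values, pct):
    """Discrete percentile: smallest value whose cumulative rank >= pct."""
    xs = sorted(values)
    return xs[max(0, math.ceil(pct * len(xs)) - 1)]

data = [10, 20, 30, 40]
print(percentile_cont(data, 0.5))  # 25.0 (interpolated, not in the data)
print(percentile_disc(data, 0.5))  # 20 (an actual value from the data)
```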
String Functions
| Function | Description |
|---|---|
| toLower(str) / toUpper(str) | Case conversion |
| trim(str) / ltrim(str) / rtrim(str) | Whitespace trimming |
| replace(str, search, repl) | Replace all occurrences |
| substring(str, start, len?) | Extract substring |
| left(str, n) / right(str, n) | First / last n chars |
| split(str, delim) | Split into list |
| reverse(str) | Reverse |
| toString(val) | Convert to string |
Math Functions
| Function | Description |
|---|---|
| abs(x) | Absolute value |
| round(x) / floor(x) / ceil(x) | Rounding |
| sign(x) | Sign (-1, 0, 1) |
| sqrt(x) | Square root |
| log(x) / log10(x) | Logarithm |
| exp(x) | e^x |
| rand() | Random [0, 1) |
| sin / cos / tan | Trigonometric |
| asin / acos / atan / atan2 | Inverse trig |
| degrees(x) / radians(x) | Angle conversion |
| pi() / e() | Constants |
List Functions
| Function | Description |
|---|---|
| head(list) / last(list) | First / last element |
| tail(list) | All except first |
| size(list) | Length (also strings) |
| range(start, end, step?) | Integer list (inclusive) |
| reverse(list) | Reverse |
| coalesce(a, b, ...) | First non-null |
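One translation pitfall worth calling out: range() here has an inclusive end, whereas Python's range() excludes it. A quick sketch of the mapping (positive steps assumed):

```python
def cypher_range(start, end, step=1):
    """Cypher-style range: includes the end value (positive steps assumed).

    Python's range() excludes the end, so we extend it by one."""
    return list(range(start, end + 1, step))

print(cypher_range(1, 5))      # [1, 2, 3, 4, 5]
print(cypher_range(0, 10, 5))  # [0, 5, 10]
```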
Type Functions
| Function | Description |
|---|---|
| toInteger(val) / toFloat(val) / toBoolean(val) | Type conversion |
| toStringOrNull / toIntegerOrNull / toFloatOrNull / toBooleanOrNull | Null-safe conversion |
| valueType(val) | Type name string |
Graph Functions
| Function | Description |
|---|---|
| id(entity) | Internal ID |
| labels(node) | List of labels |
| type(rel) | Relationship type |
| properties(entity) / keys(entity) | Property map / keys |
| startNode(rel) / endNode(rel) | Source / target node |
| nodes(path) / relationships(path) | Extract from path |
| length(path) | Path or list length |
| timestamp() | Epoch milliseconds |
Graph Embeddings
Graphiquity computes deterministic 128-dimensional structural fingerprints from graph topology. These capture label, property, degree, and neighborhood patterns using a Weisfeiler-Lehman style algorithm. No ML models or external APIs required.
Precompute embeddings
Store _embedding and _embeddingAt on every node of a label:
CALL db.materializeEmbeddings('Claim') YIELD label, nodesUpdated, dimensions, embeddingAt
Similarity search
Find structurally similar nodes (requires embeddings materialized first):
MATCH (c:Claim {id: 'claim_123'})
CALL db.similar(c, 10) YIELD node, score
RETURN node.id, node.status, score
Compute embedding (without storing)
MATCH (c:Claim {id: 'claim_123'}) CALL db.computeEmbedding(c) YIELD nodeId, embedding, dimensions
Cosine similarity function
MATCH (a:Claim), (b:Claim)
WHERE a.id = 'c1' AND b.id = 'c2'
RETURN db.vectorSimilarity(a._embedding, b._embedding) AS similarity
Use cases
- Fraud detection — find claims structurally similar to known fraud (same repair shops, doctors, adjusters)
- Risk clustering — find policies with similar relationship patterns
- Anomaly detection — find nodes that look nothing like their peers
How it works
| Dimensions | What they capture |
|---|---|
| 0–15 | Node labels (feature-hashed) |
| 16–31 | Property key/value pairs |
| 32–47 | Degree features (in/out, per-type) |
| 48–79 | 1-hop neighbor label distribution |
| 80–111 | Relationship type + direction |
| 112–127 | 2-hop WL signature (neighbor-of-neighbor patterns) |
Embeddings are L2-normalized for cosine similarity. The algorithm is deterministic —
same graph state always produces the same vector. Refresh with
db.materializeEmbeddings after significant graph changes.
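Because the stored vectors are L2-normalized, cosine similarity reduces to a plain dot product. A small Python sketch of that identity (illustrative only; db.vectorSimilarity is the built-in way to compute this):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine(a, b):
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

a = l2_normalize([1.0, 2.0, 3.0])
b = l2_normalize([2.0, 4.0, 6.0])

# For unit vectors, cosine similarity equals the raw dot product.
dot = sum(x * y for x, y in zip(a, b))
print(round(cosine(a, b), 6))  # 1.0 (parallel vectors)
```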
Graph Algorithms
GDS-style graph analytics as built-in Cypher procedures. All algorithms accept optional label and relationship type filters to operate on subgraphs — no projection catalog needed.
Centrality
PageRank — iterative centrality scoring based on incoming link structure.
-- PageRank with defaults (20 iterations, 0.85 damping)
CALL db.pageRank({iterations: 20, dampingFactor: 0.85})
YIELD nodeId, score
RETURN nodeId, score ORDER BY score DESC LIMIT 10

-- Filter to a subgraph and write scores back
CALL db.pageRank({labels: ['Person'], relationshipTypes: ['KNOWS'], writeProperty: 'pagerank'})
YIELD nodeId, score
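For intuition, the power iteration behind those defaults can be sketched in a few lines of Python. This is a minimal sketch, not the engine's implementation; in particular, the dangling-node handling shown here (spreading their rank evenly) is an assumption.

```python
def pagerank(adj, iterations=20, damping=0.85):
    """Power-iteration PageRank over an out-adjacency dict."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        nxt = {v: (1 - damping) / n for v in nodes}
        for v, outs in adj.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    nxt[w] += share
            else:
                # Dangling node (no out-edges): spread its rank evenly.
                for w in nodes:
                    nxt[w] += damping * rank[v] / n
        rank = nxt
    return rank

graph = {'a': ['b'], 'b': ['c'], 'c': ['a', 'b']}
scores = pagerank(graph)
```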
Degree centrality — count edges per node (in, out, or both).
CALL db.degreeCentrality({labels: ['Movie'], relationshipTypes: ['ACTED_IN'], direction: 'in'})
YIELD nodeId, score
RETURN nodeId, score ORDER BY score DESC
Community Detection
Connected components — union-find algorithm identifies isolated clusters.
CALL db.connectedComponents()
YIELD nodeId, componentId, componentSize
RETURN componentId, componentSize, collect(nodeId) AS members
ORDER BY componentSize DESC
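The union-find approach named above can be sketched compactly in Python (illustrative only; the engine's internals and component-ID assignment may differ):

```python
def connected_components(edges, nodes):
    """Union-find: group nodes into connected components."""
    parent = {v: v for v in nodes}

    def find(v):
        # Follow parent pointers to the root, halving paths as we go.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    comps = {}
    for v in nodes:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

comps = connected_components([('a', 'b'), ('b', 'c')], ['a', 'b', 'c', 'd'])
print(sorted(len(c) for c in comps))  # [1, 3] — {a,b,c} and isolated {d}
```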
Label propagation — community detection by iterative neighbor voting.
CALL db.labelPropagation({labels: ['Person'], relationshipTypes: ['KNOWS'], writeProperty: 'community'})
YIELD nodeId, communityId, communitySize
RETURN communityId, communitySize, collect(nodeId) AS members
Path Finding
Shortest path — BFS (unweighted) or Dijkstra (weighted).
-- Unweighted (BFS)
CALL db.shortestPath('node_1', 'node_2', {direction: 'out'})
YIELD found, path, totalWeight, nodeCount, relationshipCount

-- Weighted (Dijkstra)
CALL db.shortestPath('node_1', 'node_2', {weightProperty: 'cost'})
YIELD found, path, totalWeight
Link Prediction
Common neighbors — count of shared neighbors between two nodes.
CALL db.linkPrediction.commonNeighbors('node_1', 'node_2') YIELD score
Adamic-Adar — weights shared neighbors by inverse log of their degree.
CALL db.linkPrediction.adamicAdar('node_1', 'node_2') YIELD score
Jaccard — intersection / union of neighbor sets (0-1 normalized).
CALL db.linkPrediction.jaccard('node_1', 'node_2') YIELD score
Preferential attachment — product of both nodes' degrees.
CALL db.linkPrediction.preferentialAttachment('node_1', 'node_2') YIELD score
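The four scores above can be computed by hand over a toy undirected adjacency map. This sketch shows the textbook formulas; the engine's exact direction handling and degree conventions are assumptions here.

```python
import math

adj = {'a': {'x', 'y'}, 'b': {'x', 'y', 'z'},
       'x': {'a', 'b'}, 'y': {'a', 'b'}, 'z': {'b'}}

def nbrs(v):
    return adj.get(v, set())

def common_neighbors(a, b):
    return len(nbrs(a) & nbrs(b))

def adamic_adar(a, b):
    # Shared neighbors weighted by 1/log(degree): rare connectors count more.
    # Degree-1 neighbors are skipped to avoid division by log(1) = 0.
    return sum(1.0 / math.log(len(nbrs(w)))
               for w in nbrs(a) & nbrs(b) if len(nbrs(w)) > 1)

def jaccard(a, b):
    union = nbrs(a) | nbrs(b)
    return common_neighbors(a, b) / len(union) if union else 0.0

def preferential_attachment(a, b):
    return len(nbrs(a)) * len(nbrs(b))

print(common_neighbors('a', 'b'))         # 2 (x and y)
print(jaccard('a', 'b'))                  # 0.6666666666666666 (2 shared / 3 total)
print(preferential_attachment('a', 'b'))  # 6 (degree 2 * degree 3)
```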
Predict — top-K link predictions for a single node. The key product feature: “What should this node be connected to?”
MATCH (c:Claim {id: 'claim_123'})
CALL db.linkPrediction.predict(c, 10, {
  algorithm: 'adamicAdar',
  candidateLabel: 'Claim',
  excludeExisting: true
}) YIELD node, score
RETURN node, score

-- With explanation (shows why the prediction was made)
MATCH (c:Claim {id: 'claim_123'})
CALL db.linkPrediction.predict(c, 5, {
  algorithm: 'adamicAdar',
  candidateLabel: 'Claim',
  explain: true
}) YIELD node, score, sharedNeighbors, contributions
Similarity
kNN graph — construct k-nearest-neighbor graph from embedding cosine similarity.
-- Requires db.materializeEmbeddings to have been run first
CALL db.knnGraph({label: 'Claim', k: 5})
YIELD node1, node2, similarity

-- Write similarity edges back to the graph
CALL db.knnGraph({label: 'Claim', k: 5, writeRelationshipType: 'SIMILAR_TO'})
YIELD node1, node2, similarity
Triangle Count & Clustering
Triangle count — counts triangles per node and computes local clustering coefficient.
CALL db.triangleCount()
YIELD nodeId, triangles, coefficient
RETURN nodeId, triangles, coefficient ORDER BY triangles DESC
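A Python sketch of both outputs, assuming the standard definition of the local clustering coefficient (triangles through a node divided by the number of possible pairs among its neighbors); the engine's exact definition is an assumption:

```python
from itertools import combinations

def triangle_stats(adj):
    """Per-node (triangle count, local clustering coefficient)."""
    out = {}
    for v, neighbors in adj.items():
        # A triangle exists when two neighbors of v are also connected.
        tri = sum(1 for a, b in combinations(neighbors, 2)
                  if b in adj.get(a, set()))
        deg = len(neighbors)
        pairs = deg * (deg - 1) / 2
        out[v] = (tri, tri / pairs if pairs else 0.0)
    return out

adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
stats = triangle_stats(adj)
print(stats['a'])  # (1, 1.0) — both of a's neighbors are connected
print(stats['c'])  # (1, 0.3333333333333333) — 1 triangle out of 3 possible pairs
```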
Anomaly Scoring
db.anomalyScore — identifies anomalous nodes by comparing structural properties against the population.
Three methods: degree (z-score of node degree), community (fraction of neighbors outside same community),
and structural (composite of both, default).
-- Structural anomaly (default) — composite score
CALL db.anomalyScore({labels: ['Claim'], method: 'structural'})
YIELD nodeId, anomalyScore, details
RETURN nodeId, anomalyScore, details.zScore, details.communityAnomaly
ORDER BY anomalyScore DESC LIMIT 20

-- Degree anomaly — z-score vs population mean
CALL db.anomalyScore({labels: ['Person'], method: 'degree'})
YIELD nodeId, anomalyScore, details

-- Community anomaly — neighbor community mismatch
CALL db.anomalyScore({labels: ['Account'], method: 'community'})
YIELD nodeId, anomalyScore, details
Options: labels, relationshipTypes, method ('degree'|'community'|'structural'), atTime
Returns: nodeId, anomalyScore (0-1), details (method-specific breakdown)
Pipelines
db.pipeline — run a sequence of algorithm steps as a single operation. Useful for building derived feature sets.
CALL db.pipeline([
  {type: 'materializeEmbeddings', label: 'Claim'},
  {type: 'pageRank', labels: ['Claim'], writeProperty: 'pagerank'},
  {type: 'knnGraph', label: 'Claim', k: 5, writeRelationshipType: 'SIMILAR_TO'},
  {type: 'anomalyScore', labels: ['Claim'], method: 'structural'}
]) YIELD step, ms, rowsProcessed
RETURN step, ms, rowsProcessed
Step types: materializeEmbeddings, pageRank, connectedComponents,
labelPropagation, triangleCount, anomalyScore, knnGraph.
Each step accepts the same options as its standalone procedure.
Temporal Analytics
All algorithms accept an atTime option to run against graph state at a specific point in time.
No other graph database supports this.
-- PageRank as of January 1, 2025
CALL db.pageRank({atTime: '2025-01-01T00:00:00Z'})
YIELD nodeId, score
RETURN nodeId, score ORDER BY score DESC LIMIT 10

-- Connected components 6 months ago
CALL db.connectedComponents({atTime: '2024-07-01T00:00:00Z'})
YIELD nodeId, componentId, componentSize
Algorithm diff — compare algorithm results between two points in time.
-- How did PageRank change between Q1 and Q3?
CALL db.algorithm.diff('pageRank', '2025-01-01T00:00:00Z', '2025-07-01T00:00:00Z')
YIELD nodeId, change, t1Value, t2Value, delta
RETURN nodeId, change, delta ORDER BY abs(delta) DESC LIMIT 20
Options Reference
| Option | Type | Used By | Description |
|---|---|---|---|
| labels | string[] | All centrality/community | Filter nodes by label |
| relationshipTypes | string[] | All | Filter edges by type |
| iterations | number | PageRank, LabelPropagation | Algorithm iterations |
| dampingFactor | number | PageRank | Damping factor (default 0.85) |
| direction | string | DegreeCentrality, ShortestPath | 'in', 'out', or 'both' |
| weightProperty | string | ShortestPath | Edge weight for Dijkstra |
| writeProperty | string | PageRank, Components, LabelProp | Write scores back to nodes |
| k | number | knnGraph | Neighbor count (default 5) |
| atTime | string | All | Run against graph state at this ISO timestamp |
Examples
Build a social network
CREATE (alice:Person {name: 'Alice', age: 30})
CREATE (bob:Person {name: 'Bob', age: 25})

MATCH (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'})
CREATE (a)-[:FRIEND {since: 2020}]->(b)
Aggregation with grouping
MATCH (p:Person)-[:FRIEND]->(f) RETURN p.name, count(f) AS friends ORDER BY friends DESC
Batch upsert
UNWIND [{name: 'Alice'}, {name: 'Bob'}] AS props
MERGE (p:Person {name: props.name})
ON CREATE SET p.status = 'new'
ON MATCH SET p.status = 'existing'
Variable-length path
MATCH p = (a:Person {name: 'Alice'})-[:FRIEND*1..3]->(b)
RETURN b.name, length(relationships(p)) AS distance
List comprehension
WITH [1,2,3,4,5,6,7,8,9,10] AS nums
RETURN [x IN nums WHERE x % 2 = 0 | x * x] AS evenSquares
Base URL
https://api.graphiquity.com
All endpoints are relative to this base. Requests and responses use application/json.
Authentication
All authenticated requests require the Authorization header.
API Key Recommended for apps
Create an API key on the API Keys page. Pass it as a Bearer token:
Authorization: Bearer gq_a1b2c3d4e5f6...
API keys grant access to query, batch, import, history, and graph management endpoints. Both the /e/ (direct engine) and Lambda-proxied paths accept API keys. Store keys securely — they cannot be retrieved after creation.
JWT Token For web apps
Authenticate via Cognito to get an ID token:
Authorization: Bearer eyJraWQiOiJ...
JWT tokens grant access to all endpoints and expire after 1 hour.
Base URL
Graph operations (query, import, batch, history, graphs, backup) use the engine direct path with the /e/ prefix:
https://api.graphiquity.com/e/query
https://api.graphiquity.com/e/graphs/{name}/import
https://api.graphiquity.com/e/graphs/{name}/schema
The /e/ prefix routes directly to the EC2 engine — no Lambda in the path, lower latency, higher timeout. Both JWT and API key auth are supported.
POST /query
Execute a Cypher query against a graph. JWT API Key
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| graph | string | Required | Graph name |
| cypher | string | Required | Cypher query |
| parameters | object | Optional | Query parameters ($param syntax) |
| atTime | string | Optional | ISO 8601 timestamp for time travel |
| limit | integer | Optional | Max results to return (API-level, max 10,000) |
| offset | integer | Optional | Skip first N results (for pagination) |
Response
{
"status": 200,
"data": [
{ "name": "Alice", "age": 30 },
{ "name": "Bob", "age": 25 }
],
"totalCount": 100, // present when limit/offset used
"offset": 0,
"limit": 25
}
Examples
curl -X POST https://api.graphiquity.com/query \
  -H "Authorization: Bearer gq_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "graph": "my-graph",
    "cypher": "MATCH (n:Person) WHERE n.age > $minAge RETURN n.name, n.age",
    "parameters": { "minAge": 25 }
  }'
const API_KEY = 'gq_YOUR_API_KEY';
const BASE = 'https://api.graphiquity.com';

async function query(graph, cypher, parameters = {}) {
  const res = await fetch(`${BASE}/query`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ graph, cypher, parameters }),
  });
  const json = await res.json();
  if (json.error) throw new Error(json.error);
  return json.data;
}

// Usage
const people = await query('my-graph',
  'MATCH (n:Person) WHERE n.age > $minAge RETURN n.name, n.age',
  { minAge: 25 }
);
import requests

API_KEY = "gq_YOUR_API_KEY"
BASE = "https://api.graphiquity.com"

def query(graph, cypher, parameters=None, at_time=None):
    body = {"graph": graph, "cypher": cypher}
    if parameters:
        body["parameters"] = parameters
    if at_time:
        body["atTime"] = at_time
    resp = requests.post(
        f"{BASE}/query",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=body,
    )
    data = resp.json()
    if "error" in data:
        raise Exception(data["error"])
    return data["data"]

# Usage
people = query("my-graph",
    "MATCH (n:Person) WHERE n.age > $minAge RETURN n.name, n.age",
    {"minAge": 25}
)
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

const apiKey = "gq_YOUR_API_KEY"
const baseURL = "https://api.graphiquity.com"

func query(graph, cypher string, params map[string]interface{}) (map[string]interface{}, error) {
	body, _ := json.Marshal(map[string]interface{}{
		"graph": graph, "cypher": cypher, "parameters": params,
	})
	req, _ := http.NewRequest("POST", baseURL+"/query", bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var result map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&result)
	return result, nil
}
require 'net/http'
require 'json'
require 'uri'

API_KEY = "gq_YOUR_API_KEY"
BASE = "https://api.graphiquity.com"

def query(graph, cypher, parameters: {})
  uri = URI("#{BASE}/query")
  req = Net::HTTP::Post.new(uri)
  req["Authorization"] = "Bearer #{API_KEY}"
  req["Content-Type"] = "application/json"
  req.body = { graph: graph, cypher: cypher, parameters: parameters }.to_json
  res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |h| h.request(req) }
  JSON.parse(res.body)["data"]
end
import java.net.URI;
import java.net.http.*;

HttpRequest req = HttpRequest.newBuilder()
    .uri(URI.create("https://api.graphiquity.com/query"))
    .header("Authorization", "Bearer " + API_KEY)
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString("""
        {"graph":"my-graph","cypher":"MATCH (n:Person) RETURN n","parameters":{}}
        """))
    .build();
HttpResponse<String> res = HttpClient.newHttpClient()
    .send(req, HttpResponse.BodyHandlers.ofString());
System.out.println(res.body());
using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiKey}");
var body = JsonSerializer.Serialize(new {
    graph = "my-graph",
    cypher = "MATCH (n:Person) WHERE n.age > $minAge RETURN n",
    parameters = new { minAge = 25 }
});
var res = await client.PostAsync("https://api.graphiquity.com/query",
    new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(await res.Content.ReadAsStringAsync());
<?php
$ch = curl_init('https://api.graphiquity.com/query');
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: Bearer gq_YOUR_API_KEY',
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS => json_encode([
        'graph' => 'my-graph',
        'cypher' => 'MATCH (n:Person) RETURN n.name',
    ]),
]);
$result = json_decode(curl_exec($ch), true);
print_r($result['data']);
?>
Creating Data
Use CREATE and MERGE via the query endpoint.
// Create a node
await query('my-graph',
  `CREATE (p:Person {name: $name, age: $age}) RETURN p`,
  { name: 'Alice', age: 30 });

// Create a relationship
await query('my-graph', `
  MATCH (a:Person {name: $from}), (b:Person {name: $to})
  CREATE (a)-[:FRIEND {since: $year}]->(b)`,
  { from: 'Alice', to: 'Bob', year: 2024 });

// Batch upsert
await query('my-graph', `
  UNWIND $people AS props
  MERGE (p:Person {name: props.name})
  ON CREATE SET p.age = props.age`,
  { people: [{name: 'Alice', age: 30}, {name: 'Bob', age: 25}] });
# Create a node
query("my-graph",
    "CREATE (p:Person {name: $name, age: $age})",
    {"name": "Alice", "age": 30})

# Batch upsert
query("my-graph", """
    UNWIND $people AS props
    MERGE (p:Person {name: props.name})
    ON CREATE SET p.age = props.age
""", {"people": [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]})
curl -X POST https://api.graphiquity.com/query \
  -H "Authorization: Bearer gq_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"graph":"my-graph","cypher":"CREATE (:Person {name:$name})",
       "parameters":{"name":"Alice"}}'
Batch
Execute up to 1,000 queries in a single request against the same graph. Queries run sequentially and count as one rate-limit hit. Ideal for bulk data loading.
| Field | Type | Description |
|---|---|---|
| graph | string | Target graph name |
| queries | array | Array of query objects or strings (max 1,000) |
| queries[].cypher | string | Cypher statement |
| queries[].parameters | object | Optional parameters for the query |
const res = await fetch('https://api.graphiquity.com/batch', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    graph: 'my-graph',
    queries: [
      { cypher: 'CREATE (:Person {name: $name})', parameters: { name: 'Alice' } },
      { cypher: 'CREATE (:Person {name: $name})', parameters: { name: 'Bob' } },
      'MATCH (a:Person {name:"Alice"}), (b:Person {name:"Bob"}) CREATE (a)-[:FRIEND]->(b)'
    ]
  })
});
const { data } = await res.json();
console.log(`${data.succeeded} succeeded, ${data.failed} failed`);
import requests

res = requests.post("https://api.graphiquity.com/batch",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "graph": "my-graph",
        "queries": [
            {"cypher": "CREATE (:Person {name: $name})", "parameters": {"name": "Alice"}},
            {"cypher": "CREATE (:Person {name: $name})", "parameters": {"name": "Bob"}},
        ]
    })
data = res.json()["data"]
print(f'{data["succeeded"]} succeeded, {data["failed"]} failed')
curl -X POST https://api.graphiquity.com/batch \
  -H "Authorization: Bearer gq_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"graph":"my-graph","queries":[
    {"cypher":"CREATE (:Person {name:$name})","parameters":{"name":"Alice"}},
    {"cypher":"CREATE (:Person {name:$name})","parameters":{"name":"Bob"}}
  ]}'
Response
{
"status": 200,
"data": {
"results": [
{ "data": { "columns": [], "rows": [], "stats": { "nodesCreated": 1 } } },
{ "data": { "columns": [], "rows": [], "stats": { "nodesCreated": 1 } } }
],
"succeeded": 2,
"failed": 0,
"total": 2
}
}
Bulk Import
Import up to 50,000 operations per chunk. Returns 202 immediately with a job ID — processing runs asynchronously with cooperative yielding so it won't block queries or other writes. Poll GET /jobs/{jobId} for progress.
Supports mergeNode/mergeEdge for idempotent upserts. For large imports, send multiple chunks with "final": false on every chunk except the very last one. Only the final chunk triggers compaction. If final is omitted it defaults to true. Node and edge IDs are always system-generated; use ref tags to cross-reference within a batch.
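The chunking rule above (final: false on every chunk except the last) can be sketched as a small helper. A hedged sketch: `send` is any callable that POSTs one request body to the import endpoint, so the demo uses a plain list instead of the network.

```python
def import_in_chunks(send, operations, chunk_size=50_000):
    """Split operations into import-sized chunks.

    Sets final=False on every chunk except the last, so only the last
    chunk triggers compaction (per the documented semantics)."""
    for i in range(0, len(operations), chunk_size):
        chunk = operations[i:i + chunk_size]
        send({'operations': chunk,
              'final': i + chunk_size >= len(operations)})

# Demo with a captured list instead of a real HTTP sender:
sent = []
ops = [{'op': 'createNode', 'labels': ['Person']} for _ in range(5)]
import_in_chunks(sent.append, ops, chunk_size=2)
print([b['final'] for b in sent])  # [False, False, True]
```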
POST /graphs/{name}/import
| Field | Type | Required | Description |
|---|---|---|---|
| operations | array | Required | Array of operation objects (max 50,000) |
| operations[].op | string | Required | One of: createNode, createEdge, mergeNode, mergeEdge, updateNode, updateEdge, deleteNode, deleteEdge, removeNodeProps, removeNodeLabel |
| final | boolean | Optional | Set to false on all chunks except the last. Only the final chunk triggers compaction. Defaults to true. |
| actor | string | Optional | Actor name for provenance tracking |
Operation Shapes
| Operation | Required Fields | Optional |
|---|---|---|
| createNode | labels (array) | properties |
| createEdge | sourceId, targetId, type | properties |
| mergeNode | labels (array), matchProperties (object) | onCreate, onMatch, ref |
| mergeEdge | type, plus endpoints via sourceId/targetId (ID or ref) or source/target objects | matchProperties, onCreate, onMatch, ref |
| updateNode | nodeId, updates (object) | |
| updateEdge | edgeId, updates (object) | |
| deleteNode | nodeId | |
| deleteEdge | edgeId | |
| removeNodeProps | nodeId, properties (array of keys) | |
| removeNodeLabel | nodeId, label | |
const res = await fetch('https://api.graphiquity.com/e/graphs/my-graph/import', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    actor: 'etl-pipeline',
    operations: [
      // Idempotent upsert — creates if new, updates if exists
      { op: 'mergeNode', labels: ['Person'],
        matchProperties: { email: 'alice@co.com' },
        onCreate: { name: 'Alice', age: 30 },
        onMatch: { lastSeen: '2026-03-14' },
        ref: 'alice' },
      { op: 'mergeNode', labels: ['Person'],
        matchProperties: { email: 'bob@co.com' },
        onCreate: { name: 'Bob', age: 25 },
        ref: 'bob' },
      // ref resolves to the matched/created node's _id
      { op: 'mergeEdge', sourceId: 'alice', targetId: 'bob',
        type: 'FRIEND', onCreate: { since: 2024 } },
      // Plain creates also work in the same batch
      { op: 'createNode', labels: ['Company'], properties: { name: 'Acme' } }
    ]
  })
});
const { operations, created, updated, ms } = await res.json();
console.log(`${created} created, ${updated} updated in ${ms}ms`);
import requests

res = requests.post("https://api.graphiquity.com/e/graphs/my-graph/import",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "actor": "etl-pipeline",
        "operations": [
            {"op": "mergeNode", "labels": ["Person"],
             "matchProperties": {"email": "alice@co.com"},
             "onCreate": {"name": "Alice"}, "ref": "alice"},
            {"op": "mergeEdge", "sourceId": "alice", "targetId": "b1",
             "type": "FRIEND", "onCreate": {"since": 2024}},
        ]
    })
data = res.json()
print(f'{data["created"]} created in {data["ms"]}ms')
curl -X POST https://api.graphiquity.com/e/graphs/my-graph/import \
  -H "Authorization: Bearer gq_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"actor":"etl","operations":[
    {"op":"mergeEdge","type":"FRIEND",
     "source":{"labels":["Person"],"matchProperties":{"email":"alice@co.com"}},
     "target":{"labels":["Person"],"matchProperties":{"email":"bob@co.com"}},
     "onCreate":{"since":2024}}
  ]}'
Response
{
"status": 200,
"operations": 4,
"created": 2,
"merged": 2, // matched existing entities
"updated": 0,
"deleted": 0,
"errors": 0,
"commitSeq": 42,
"ms": 156
}
If any individual operations fail, the response includes an errors count and a failures array (capped at the first 20 failures). Successful operations within the batch are still committed.
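Because a batch can partially succeed, callers usually inspect both the counters and the failures array before deciding what to retry. A minimal client-side sketch; the helper and its return shape are illustrative, not part of the API:

```javascript
// Summarize an import response, surfacing partial failures.
// `failures` entries are treated as opaque here; their exact shape is
// an assumption beyond what the response example documents.
function summarizeImport(body) {
  const { created = 0, merged = 0, updated = 0, deleted = 0,
          errors = 0, failures = [] } = body;
  return {
    ok: errors === 0,
    applied: created + merged + updated + deleted,
    errors,
    // failures is capped at the first 20, so errors can exceed failures.length
    truncated: errors > failures.length,
  };
}

const s = summarizeImport({ created: 2, merged: 2, updated: 0, deleted: 0, errors: 0 });
console.log(s); // { ok: true, applied: 4, errors: 0, truncated: false }
```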
Merge Semantics
mergeNode matches by labels + matchProperties (uses indexes when available, falls back to label scan). If found, applies onMatch updates. If not, creates a new node with matchProperties + onCreate combined.
mergeEdge matches by source + target + type + optional matchProperties. Same create/update logic. Endpoints can be specified three ways:
- By ID: `"sourceId": "node-abc-123"` — direct internal node ID
- By ref: `"sourceId": "alice"` — resolves to the ID from a prior `mergeNode` with `"ref": "alice"` in the same batch
- By properties: `"source": {"labels": ["Person"], "matchProperties": {"email": "alice@co.com"}}` — finds existing node by label + properties (uses indexes when available)
The source/target object form is ideal for edge-only imports where nodes already exist in the graph. Use ref chaining when importing nodes and edges together in the same batch.
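Since a ref only resolves to a prior mergeNode in the same batch, a batch can be sanity-checked client-side before sending. A hypothetical pre-flight helper (not part of the API) based on that ordering rule:

```javascript
// Pre-flight check: every sourceId/targetId that names a ref must be
// declared by an EARLIER mergeNode in the same batch. Values that match
// no ref at all are assumed to be literal node IDs and pass through.
function checkRefs(operations) {
  const declared = new Set();
  const problems = [];
  operations.forEach((op, i) => {
    if (op.op === 'mergeEdge') {
      for (const key of ['sourceId', 'targetId']) {
        const v = op[key];
        // Only flag values that some LATER mergeNode declares as a ref,
        // i.e. a ref used before its declaration.
        if (v && !declared.has(v) &&
            operations.slice(i + 1).some(o => o.op === 'mergeNode' && o.ref === v)) {
          problems.push(`op ${i}: ${key} '${v}' used before its mergeNode ref`);
        }
      }
    }
    if (op.op === 'mergeNode' && op.ref) declared.add(op.ref);
  });
  return problems;
}
```

Running this on a batch where the `bob` mergeNode comes after the edge that uses it returns one problem string; a correctly ordered batch returns an empty array.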
Fast Import (recommended for bulk loads)
For maximum throughput, use POST /graphs/{name}/fast-import. This bypasses the storage mutation API and appends directly to .dat files, then rebuilds all indexes. 100x+ faster than the standard import. Max 100K ops per call. Returns 202 with a job ID.
const res = await fetch('https://api.graphiquity.com/e/graphs/my-graph/fast-import', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({ operations: largeOpsArray })
});
const { data: { jobId } } = await res.json();

// Poll for completion
let status = 'running';
while (status === 'running') {
  await new Promise(r => setTimeout(r, 2000));
  const job = await (await fetch(`https://api.graphiquity.com/e/jobs/${jobId}`, {
    headers: { 'Authorization': `Bearer ${apiKey}` }
  })).json();
  status = job.data.status;
  console.log(job.data.progress);
}
curl -X POST https://api.graphiquity.com/e/graphs/my-graph/fast-import \
  -H "Authorization: Bearer gq_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"operations":[{"op":"createNode","labels":["Person"],"properties":{"name":"Alice"}}]}'
# → {"status":202,"data":{"jobId":"fastimport_...","polling":"/jobs/..."}}
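With the 100K-operations-per-call cap, bulk loaders typically shard their array before calling fast-import. A small sketch of that step:

```javascript
// Split a large operations array into batches under the 100K-per-call cap.
function chunkOps(operations, max = 100_000) {
  const batches = [];
  for (let i = 0; i < operations.length; i += max) {
    batches.push(operations.slice(i, i + max));
  }
  return batches;
}

// e.g. 250,001 ops → batches of 100000, 100000, 50001
```

Note that refs resolve only within a single batch, so keep any mergeNode/mergeEdge ref chain inside one chunk.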
Large Imports via S3 Upload
For payloads exceeding 10MB, upload directly to S3 to bypass API Gateway's body size limit:
// 1. Get a presigned upload URL
const urlRes = await fetch('https://api.graphiquity.com/e/graphs/my-graph/import/upload-url', {
  headers: { 'Authorization': `Bearer ${apiKey}` }
});
const { uploadUrl, s3Key } = await urlRes.json();

// 2. Upload import data directly to S3 (no size limit)
await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    operations: largeOperationsArray, // any size
    final: true
  })
});

// 3. Trigger processing
const res = await fetch('https://api.graphiquity.com/e/graphs/my-graph/import/from-s3', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({ s3Key })
});
const result = await res.json();
console.log(`${result.created} created, ${result.merged} merged`);
# 1. Get upload URL
curl -s https://api.graphiquity.com/e/graphs/my-graph/import/upload-url \
  -H "Authorization: Bearer gq_YOUR_API_KEY"
# → {"uploadUrl":"https://s3...","s3Key":"imports/my-graph/imp-xxx.json"}

# 2. Upload to S3 (use the uploadUrl from step 1)
curl -X PUT "UPLOAD_URL_HERE" \
  -H "Content-Type: application/json" \
  -d @my-large-import.json

# 3. Process
curl -X POST https://api.graphiquity.com/e/graphs/my-graph/import/from-s3 \
  -H "Authorization: Bearer gq_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"s3Key":"imports/my-graph/imp-xxx.json"}'
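The choice between inline import and the S3 path can be made mechanically from the serialized payload size, given the 10MB body limit mentioned above. A sketch (the helper itself is illustrative):

```javascript
// Route an import by serialized size: inline for small payloads,
// S3 upload for anything over the 10MB API Gateway body limit.
const TEN_MB = 10 * 1024 * 1024;

function importRoute(payload, limitBytes = TEN_MB) {
  const bytes = new TextEncoder().encode(JSON.stringify(payload)).length;
  return bytes > limitBytes ? 's3-upload' : 'inline';
}
```

In practice you would call the three-step upload flow when this returns `'s3-upload'` and plain `POST /import` otherwise.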
Backup & Restore
Create point-in-time backups stored in S3. Restore any backup to recover data after accidental deletion or corruption.
POST /graphs/{name}/backup
Creates a compressed tarball of the graph's snapshot, committed.json, and history, then uploads to S3. Returns the backup key for later restoration.
const res = await fetch('https://api.graphiquity.com/e/graphs/my-graph/backup', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${apiKey}` }
});
const { key, size, seq } = await res.json();
console.log(`Backed up seq ${seq} → ${key} (${Math.round(size / 1024)}KB)`);
curl -X POST https://api.graphiquity.com/e/graphs/my-graph/backup \
-H "Authorization: Bearer gq_YOUR_API_KEY"
GET /graphs/{name}/backups
Lists all available S3 backups for a graph, sorted newest first.
POST /graphs/{name}/restore
Restores a graph from an S3 backup. Overwrites existing data.
| Field | Type | Required | Description |
|---|---|---|---|
| backupKey | string | Required | S3 key from backup or list-backups response |
POST /graphs/{name}/flush
Force-publishes the writer's current state as a new snapshot. Useful after imports to make data visible to readers immediately.
POST /graphs/{name}/rebuild-derived
Rebuilds all derived data structures (CSR indexes, sidecar indexes, bucket files, ID map) from canonical .dat files. Use after a failed import or compaction corrupts derived data. Returns counts of rebuilt artifacts.
GET /metrics
Prometheus-format metrics: heap usage, RSS, query counts, latency percentiles (p50/p95/p99), admission control stats, and per-graph commit sequences.
Temporal Queries (Time Travel)
Every mutation is versioned automatically. Query the past, diff snapshots, and audit changes — no configuration needed.
Temporal Functions
-- Check if an entity existed at a specific time
MATCH (p:Person)-[r:ACTED_IN]->(m:Movie)
WHERE temporal.validAt(r, '2024-06-15T00:00:00Z')
RETURN p.name, m.title

-- Entity age in days
MATCH (n:Person)
RETURN n.name, temporal.age(n) / 86400000 AS ageDays
ORDER BY ageDays DESC LIMIT 10

-- Check if two relationships existed at the same time
MATCH (p)-[r1:EMPLOYED_BY]->(c1), (p)-[r2:EMPLOYED_BY]->(c2)
WHERE c1 <> c2 AND temporal.overlaps(r1, r2)
RETURN p.name, c1.name, c2.name
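temporal.overlaps presumably tests whether two validity intervals intersect. The equivalent check in plain code, under the assumption of half-open [validFrom, validTo) intervals where a null validTo means "still valid":

```javascript
// Interval-overlap check mirroring what temporal.overlaps likely does:
// two half-open [validFrom, validTo) intervals intersect when each one
// starts before the other ends. null validTo = open-ended (still valid).
// Timestamps are plain numbers here for illustration.
function overlaps(a, b) {
  const aEnd = a.validTo ?? Infinity;
  const bEnd = b.validTo ?? Infinity;
  return a.validFrom < bEnd && b.validFrom < aEnd;
}
```

Under half-open semantics, an interval ending exactly when another begins does not overlap it.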
Audit & Diff Examples
-- Changes per day (temporal aggregation)
CALL db.changes('2025-01-01T00:00:00Z', '2025-12-31T23:59:59Z')
YIELD timestamp, operation
RETURN left(timestamp, 10) AS day, operation, count(*) AS ops
ORDER BY day

-- Most volatile entities
CALL db.changes('2025-01-01T00:00:00Z', '2025-12-31T23:59:59Z')
YIELD id, operation
RETURN id, count(*) AS changeCount
ORDER BY changeCount DESC LIMIT 10

-- Snapshot diff: what changed?
CALL db.diff('2025-01-01T00:00:00Z', '2025-03-01T00:00:00Z')
YIELD addedNodes, removedNodes, changedNodes, changed
RETURN addedNodes, removedNodes, changedNodes, changed
AT Syntax
Use AT directly in Cypher for temporal queries — no API options needed.
-- Query the graph as it was on Jan 1, 2025
MATCH (p:Person {name: 'Tom Hanks'}) AT '2025-01-01T00:00:00Z'
RETURN p

-- Temporal join: resolve each match at a different time
MATCH (claim:Claim {id: $claimId})
WITH claim, claim.lossDate AS t
MATCH (cust:Customer) AT t
RETURN cust

-- Property history: when did premium change?
MATCH (p:Policy {id: $id})
CALL db.propertyHistory(id(p), 'premium')
YIELD value, validFrom, validTo
RETURN value, validFrom, validTo
Point-in-Time API
Pass atTime as an ISO 8601 timestamp via the API.
const res = await fetch(`${BASE}/query`, {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    graph: 'my-graph',
    cypher: 'MATCH (n:Person) RETURN n.name, n.age',
    atTime: '2024-01-01T00:00:00Z',
  }),
});
result = query("my-graph", "MATCH (n:Person) RETURN n", at_time="2024-01-01T00:00:00Z")
curl -X POST https://api.graphiquity.com/query \
  -H "Authorization: Bearer gq_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"graph":"my-graph","cypher":"MATCH (n) RETURN n","atTime":"2024-01-01T00:00:00Z"}'
Query Modes All authenticated
The same graph data is queryable via five interfaces. All operate over the same storage layer. Select the mode in the console dropdown, or call the API directly.
Cypher (Native)
The primary query language. Full graph pattern matching, temporal queries, aggregations, mutations.
MATCH (p:Person)-[:FILED]->(c:Claim) WHERE c.amount > 10000 RETURN p.name, c.amount
SQL
Familiar relational syntax translated to Cypher internally. Labels map to tables, properties to columns.
POST /e/graphs/{name}/sql
{ "sql": "SELECT name, age FROM Person WHERE age > 30 ORDER BY age LIMIT 10" }

// JOINs map to relationships:
{ "sql": "SELECT p.name, c.amount FROM Person p JOIN Claim c ON FILED" }
GraphQL
Auto-generated schema from graph metadata. Standard GraphQL queries and introspection.
POST /e/graphs/{name}/graphql
{ "query": "{ Person(limit: 5) { name age } }" }

// Schema introspection:
GET /e/graphs/{name}/graphql/schema
Document API (REST)
RESTful CRUD over labels as document collections. No query language needed.
GET /e/graphs/{name}/collections/Person?age.gt=30&limit=10&sort=-age
Natural Language
Ask questions in plain English via the console UI. Translates to Cypher using your configured AI model. Requires an AI API key configured in Settings.
Audit Edition Forensics tier
Bitemporal queries, tamper-evident audit trail, schema versioning, WORM mode, access logging, and compliance dashboards. Built for regulated industries — insurance, fraud investigation, audit, provenance — where you need to reconstruct exactly what your system knew when. Per-graph opt-in via the Settings page or PUT /graphs/{name}/config.
Bitemporal Queries — Two Time Axes
A unitemporal database tracks one time dimension. A bitemporal database tracks two: valid time (when a fact was true in the world) and transaction time (when the system recorded its belief about the fact). The difference matters for queries like “what did our underwriting model know on the date we bound this policy?”
-- Single-axis query (back-compat)
MATCH (p:Policy) AT '2025-01-15' RETURN p

-- Explicit valid-time only (same result)
MATCH (p:Policy) AT VALID '2025-01-15' RETURN p

-- Transaction-time only: what did we know on June 1?
MATCH (p:Policy) AT RECORDED '2025-06-01' RETURN p

-- Full bitemporal: what did we know on June 1 about the state on Jan 15?
MATCH (p:Policy) AT VALID '2025-01-15' RECORDED '2025-06-01' RETURN p

-- Order is independent
MATCH (p:Policy) AT RECORDED '2025-06-01' VALID '2025-01-15' RETURN p
Bitemporal mode is per-graph and permanent once enabled. Pre-enable records use synthetic recorded times equal to their valid times. Records after bitemporalEnabledAt have full bitemporal history. To revert: restore from a backup taken before enabling.
Retroactive Backdating Writes
This covers the classic audit scenario: an underwriter typo'd a value, and you need to fix the original record without losing what the system used to believe. Set the validFrom body parameter on POST /query to backdate _valid_from on every mutation in the query. _recorded_from stays at the real current time, so the resolver can answer at any (valid, recorded) corner.
POST /e/query
{
  "graph": "insurance",
  "cypher": "MATCH (p:Policy {id: 'p1'}) SET p.risk_score = 21",
  "validFrom": "2024-06-01T00:00:00.000Z"
}
After this write, assuming the policy's risk_score was previously 12: querying with recordedAt set to a time before the correction returns the original value (12). Querying as of now returns the corrected value (21). Querying at validAt='2024-06-01' returns 21 (the corrected value, valid since the backdated date). The validFrom parameter is rejected on non-bitemporal graphs.
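The corner-by-corner behavior can be reproduced with a toy resolver over versioned facts. Everything here, including the record shape and the timestamps, is illustrative rather than the engine's actual storage:

```javascript
// Toy bitemporal resolver: each version carries a valid-time start and a
// recorded-time start; resolve(validAt, recordedAt) returns the latest
// belief among versions that were both valid at validAt and known at recordedAt.
const riskScoreVersions = [
  // Original write: valid since January, recorded in January
  { value: 12, validFrom: '2024-01-01', recordedFrom: '2024-01-01' },
  // Backdated correction: recorded in Feb 2025, but valid from June 2024
  { value: 21, validFrom: '2024-06-01', recordedFrom: '2025-02-01' },
];

function resolve(versions, validAt, recordedAt) {
  const known = versions.filter(
    v => v.recordedFrom <= recordedAt && v.validFrom <= validAt
  );
  // Among applicable versions, the most recently recorded belief wins
  known.sort((a, b) => a.recordedFrom.localeCompare(b.recordedFrom));
  return known.length ? known[known.length - 1].value : null;
}

resolve(riskScoreVersions, '2024-07-01', '2024-12-01'); // 12 (before correction was recorded)
resolve(riskScoreVersions, '2024-07-01', '2025-03-01'); // 21 (as of now)
```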
Bitemporal Audit Procedures
The change-feed procedures are bitemporal-aware:
-- db.diff with body recordedAt = "what changed in valid time, as we knew on Apr 1"
CALL db.diff('2025-01-01', '2025-03-01') YIELD entityId, changeType RETURN *
// + body { "recordedAt": "2025-04-01" }

-- db.changes axis arg ('valid' default | 'recorded')
CALL db.changes('2025-06-01', '2025-06-30', null, 'recorded')
YIELD entityId, operation RETURN *
// "show me corrections recorded in this window, regardless of valid time"

-- db.changeRate also takes an axis arg as the 5th positional
CALL db.changeRate('2025-01-01', '2025-12-31', 'day', null, 'recorded')
Schema Versioning — “What fields existed when?”
Every putSchema call appends a versioned entry to history/schema.log with timestamp, actor, and a SHA-256 prevHash chain. Auditors can ask “what did the schema look like when this record was written?”
-- Diff schemas at two points in time
CALL db.schemaDiff('2025-01-01', '2025-06-01')
YIELD changeType, kind, name, property
RETURN changeType, kind, name, property
ORDER BY changeType, kind
Or via REST:
GET /graphs/{name}/schema/definition?at=2025-06-01
GET /graphs/{name}/schema/history
Tamper-Evident Audit Chain
Every history entry (nodes and edges) carries a _prev_hash field equal to the SHA-256 of the previous chained entry's serialized JSONL form. Tampering with any entry breaks the chain — the verifier walks from the start and reports the exact line where the mismatch occurs. Pre-chain entries (legacy data written before this feature shipped) are counted but not individually verifiable.
-- Verify the entire audit chain
CALL db.verifyAuditChain()
YIELD verified, nodesChained, edgesChained, nodesPreChain,
      nodesHeadHash, edgesHeadHash, nodesBrokenAt
RETURN *
Or via REST:
GET /graphs/{name}/audit/verify
WORM (Write-Once-Read-Many) Mode
Lock a graph against all mutations for legal hold or post-incident preservation. Reads still work. Lifting WORM is itself an audit-logged action. Toggle from the Settings page or via the API.
PUT /graphs/{name}/config
{ "worm": true }
While WORM is enabled, any mutation returns 403 with WORM_LOCKED.
Access Log
Per-query audit log: every query against the graph is recorded with timestamp, actor, tenant, source IP, cypher (truncated to 500 chars), result count, status, and duration. Stored at {graphDir}/audit/access.log. Opt-in via config.accessLog: true.
GET /graphs/{name}/audit/access?from=2025-01-01&to=2025-12-31&limit=100
Compliance Dashboard & Export
One-click export of everything an auditor needs:
- chain-verify-report.json — output of db.verifyAuditChain()
- schema-timeline.json — every schema version with timestamps
- access-log.jsonl — every query in the date range
- tenant-config.json — tenancy + audit-tier flags
- inventory.json — label/edge counts
- manifest.json — SHA-256 of every file in the bundle
- README.md — written for the auditor, not the developer
POST /graphs/{name}/compliance/export
{ "from": "2025-01-01", "to": "2025-12-31" }
// Returns S3 key + manifest of files in the bundle
Or visit the compliance dashboard in the app for the visual version.
Graph Clone — “Try Before You Commit”
Bitemporal mode is permanent. To experiment safely, clone the graph first, enable bitemporal on the clone, run your real queries against it, and only flip the original once you're sure.
POST /graphs/{newName}/clone-from/{sourceName}
// Snapshot-aware: hardlinks .dat files where possible (zero extra disk)
Graph Management JWT only
GET /graphs
List all graphs.
POST /graphs
Create a graph.
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Required | Letters, numbers, hyphens, underscores. Max 64 chars. |
| description | string | Optional | Human-readable description |
DELETE /graphs/{name}
Delete a graph.
API Key Management JWT only
GET /apikeys
List keys (prefixes only).
POST /apikeys
Create a key. The full key is only shown once.
// Response
{
  "status": 201,
  "data": { "key": "gq_a1b2c3...", "prefix": "gq_a1b2c3d", "name": "My Key" }
}
DELETE /apikeys/{prefix}
Revoke a key by prefix.
Graph Schema
GET /graphs/{name}/schema
Returns the labels and edge types present in the graph. Useful for autocomplete and introspection.
// Response
{
  "status": 200,
  "data": {
    "labels": ["Person", "Company"],
    "edgeTypes": ["FRIEND", "WORKS_AT"]
  }
}
Contracts (Schema Validation)
Contracts enforce per-label and per-edge-type schemas — type checking, required fields, unique constraints, and regex patterns. Each contract has an enforcement mode:
| Mode | Behavior |
|---|---|
OFF | No validation (default) |
WARN | Writes succeed but response includes warnings array |
STRICT | Invalid writes are rejected with HTTP 400 |
GET /graphs/{name}/contracts
List all contracts defined for the graph.
// Response
{
  "status": 200,
  "data": [
    {
      "kind": "node",
      "label": "Person",
      "mode": "STRICT",
      "properties": {
        "name": { "type": "string", "required": true, "maxLength": 200 },
        "email": { "type": "string", "unique": true },
        "age": { "type": "integer" }
      }
    }
  ]
}
PUT /graphs/{name}/contracts
Create or replace a contract. The request body is the contract object.
| Field | Type | Required | Description |
|---|---|---|---|
| kind | string | Required | `"node"` or `"edge"` |
| label | string | Required when kind=node | Node label |
| relType | string | Required when kind=edge | Edge type |
| mode | string | Optional | `OFF`, `WARN`, or `STRICT` |
| properties | object | Optional | Map of property name → definition |
Property definition fields: type (string|integer|float|boolean|timestamp|json|vector), required, unique, maxLength, pattern, enum, dims.
// JavaScript
await fetch("https://api.graphiquity.com/e/graphs/myGraph/contracts", {
  method: "PUT",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    kind: "node",
    label: "Person",
    mode: "STRICT",
    properties: {
      name: { type: "string", required: true, maxLength: 200 },
      email: { type: "string", unique: true, pattern: "^[^@]+@[^@]+$" },
      age: { type: "integer" }
    }
  })
});
# Python
import requests

requests.put(
    "https://api.graphiquity.com/e/graphs/myGraph/contracts",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "kind": "node",
        "label": "Person",
        "mode": "STRICT",
        "properties": {
            "name": {"type": "string", "required": True, "maxLength": 200},
            "email": {"type": "string", "unique": True},
            "age": {"type": "integer"}
        }
    }
)
DELETE /graphs/{name}/contracts
Remove a contract. Pass {"label": "Person"} for node contracts or {"relType": "FRIEND"} for edge contracts.
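To see what STRICT enforcement implies, here is a toy client-side validator for the property-definition fields described above. It sketches the semantics, not the server's implementation; unique constraints need server-side state and are omitted:

```javascript
// Minimal validator mirroring the contract semantics: required, type,
// maxLength, pattern, enum. Returns a list of violation messages.
function validate(contract, props) {
  const errors = [];
  for (const [name, def] of Object.entries(contract.properties)) {
    const v = props[name];
    if (v === undefined || v === null) {
      if (def.required) errors.push(`${name}: required`);
      continue;
    }
    if (def.type === 'string' && typeof v !== 'string') errors.push(`${name}: expected string`);
    if (def.type === 'integer' && !Number.isInteger(v)) errors.push(`${name}: expected integer`);
    if (def.maxLength && typeof v === 'string' && v.length > def.maxLength) errors.push(`${name}: too long`);
    if (def.pattern && typeof v === 'string' && !new RegExp(def.pattern).test(v)) errors.push(`${name}: pattern mismatch`);
    if (def.enum && !def.enum.includes(v)) errors.push(`${name}: not in enum`);
  }
  return errors;
}

const personContract = {
  properties: {
    name: { type: 'string', required: true, maxLength: 200 },
    email: { type: 'string', pattern: '^[^@]+@[^@]+$' },
    age: { type: 'integer' },
  },
};
```

In WARN mode the server would attach messages like these as a warnings array; in STRICT mode a non-empty list means the write is rejected with HTTP 400.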
Cypher Constraint Syntax
You can also create and drop unique constraints via Cypher:
// Create a unique constraint
CREATE CONSTRAINT ON (p:Person) ASSERT p.email IS UNIQUE

// Drop a unique constraint
DROP CONSTRAINT ON (p:Person) ASSERT p.email IS UNIQUE
Usage Analytics
GET /usage
Returns per-day operation counts for the tenant. Data is retained for 90 days.
// Response
{
  "status": 200,
  "data": [
    { "date": "2026-03-02", "operation": "query", "count": 142 },
    { "date": "2026-03-02", "operation": "createGraph", "count": 1 }
  ]
}
User Management JWT only
GET /users
List all users in the tenant.
POST /users/invite
Invite a user. Requires owner role.
| Field | Type | Required | Description |
|---|---|---|---|
| email | string | Required | Email address |
| role | string | Optional | `owner` or `member` (default) |
DELETE /users/{userId}
Remove a user. Requires owner role.
Errors
{ "status": 400, "error": "graph and cypher are required" }
| Status | Meaning |
|---|---|
200 | Success |
201 | Created |
400 | Bad request (missing fields, invalid Cypher) |
401 | Unauthorized |
403 | Forbidden (insufficient role) |
404 | Not found |
409 | Conflict (duplicate resource) |
429 | Rate limit exceeded (200 req/min free tier) |
500 | Internal error |
Large Results
Responses that exceed 6 MB are automatically offloaded to S3. Instead of inline data, the response includes a resultUrl field containing a presigned URL (valid for 15 minutes). Fetch from this URL to retrieve the full result. The console handles this transparently.
{
"status": 200,
"resultUrl": "https://graphiquity-results.s3.amazonaws.com/...",
"queryStatus": "completed",
"resultCount": 2500
}
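Client code should therefore branch on the response shape: inline data or a resultUrl to follow. A minimal sketch; the inline payload living under `data` matches the query examples elsewhere in these docs, but treat the exact field as an assumption:

```javascript
// Detect whether a query response was offloaded to S3.
function isOffloaded(body) {
  return typeof body.resultUrl === 'string';
}

// Normalize: fetch the presigned URL (valid 15 minutes) when offloaded,
// otherwise return the inline data.
async function resolveResult(body) {
  if (!isOffloaded(body)) return body.data;
  const res = await fetch(body.resultUrl);
  return res.json();
}
```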
Quick Start
1. Get an API key
Sign up, then go to API Keys and create one. Copy it immediately.
2. Create a graph
Use the Dashboard or the POST /graphs endpoint.
3. Query away
const gq = (cypher, p) => fetch('https://api.graphiquity.com/query', {
  method: 'POST',
  headers: { Authorization: 'Bearer gq_YOUR_KEY', 'Content-Type': 'application/json' },
  body: JSON.stringify({ graph: 'my-graph', cypher, parameters: p }),
}).then(r => r.json()).then(r => r.data);

await gq("CREATE (:Person {name:'Alice'})-[:FRIEND]->(:Person {name:'Bob'})");
const friends = await gq("MATCH (a)-[:FRIEND]->(b) RETURN a.name, b.name");
// [{ "a.name": "Alice", "b.name": "Bob" }]
import requests

def gq(cypher, p=None):
    return requests.post(
        "https://api.graphiquity.com/query",
        headers={"Authorization": "Bearer gq_YOUR_KEY"},
        json={"graph": "my-graph", "cypher": cypher, "parameters": p or {}},
    ).json()["data"]

gq("CREATE (:Person {name:'Alice'})-[:FRIEND]->(:Person {name:'Bob'})")
friends = gq("MATCH (a)-[:FRIEND]->(b) RETURN a.name, b.name")
# [{"a.name": "Alice", "b.name": "Bob"}]
Graph Visualization
The graph console includes three interactive visualization features that overlay analysis results directly on the Cytoscape graph view. All features work on any graph that has been queried and rendered in graph mode.
Each feature adds a "Clear" control to restore the normal view when you're done.
Shortest Path Highlight
Visually trace the shortest path between any two nodes in the graph view.
How to use
- Right-click a node → select "Find Path From Here"
- The node is marked with a green ring (path start)
- Right-click a second node → select "Find Path To Here"
- The shortest path is highlighted:
| Start node | Green border |
| End node | Red border |
| Path nodes | Blue glow |
| Path edges | Blue highlight, 4px width |
| Other elements | Dimmed to 15% opacity |
An info bar shows the path length (node count and distance). Click "Clear Path" to restore normal view.
Backend
CALL db.shortestPath('startNodeId', 'endNodeId')
YIELD path, distance
-- BFS (unweighted) or Dijkstra (weighted)
Both nodes must exist in the current graph view. Start and end must be in the same result cell.
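The unweighted case reduces to breadth-first search. A self-contained sketch over a plain adjacency list, for intuition rather than as the engine's implementation:

```javascript
// BFS shortest path over an undirected adjacency list.
// Returns the node-ID path from start to end, or null if unreachable.
function shortestPath(adj, start, end) {
  const prev = new Map([[start, null]]); // also serves as the visited set
  const queue = [start];
  while (queue.length) {
    const cur = queue.shift();
    if (cur === end) {
      // Reconstruct the path by walking predecessors back to start
      const path = [];
      for (let n = end; n !== null; n = prev.get(n)) path.unshift(n);
      return path;
    }
    for (const next of adj[cur] ?? []) {
      if (!prev.has(next)) { prev.set(next, cur); queue.push(next); }
    }
  }
  return null;
}

const adj = { a: ['b', 'c'], b: ['a', 'd'], c: ['a', 'd'], d: ['b', 'c'] };
console.log(shortestPath(adj, 'a', 'd')); // [ 'a', 'b', 'd' ]
```

For weighted edges, the queue would be replaced with a priority queue ordered by accumulated distance (Dijkstra), as the Backend note above indicates.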
Temporal Diff Overlay
Compare the graph at two points in time and see what was added, removed, or changed.
How to use
- Click the "Temporal Diff" button in the console toolbar (clock icon)
- Select two timestamps: T1 (before) and T2 (after)
- Click "Compare"
- Nodes and edges in the current graph view are color-coded:
| Added | Green glow — exists at T2 but not T1 |
| Removed | Red dashed border — exists at T1 but not T2 |
| Changed | Yellow glow — properties changed between T1 and T2 |
| Unchanged | Dimmed to 30% opacity |
Click a changed node to see a property diff panel showing old → new values for each modified property.
A legend bar with colored dots appears at the bottom-right. Click "Clear Diff" to restore normal view.
Backend
CALL db.diff('2025-01-01', '2025-06-01')
YIELD entityId, entityType, changeType, label, property, before, after
-- Snapshot diff: one row per change between two points in time
Timestamps accept ISO 8601 format or short dates (2025, 2025-01, 2025-01-15). The diff only highlights elements that are already visible in the graph view — run a broad query first to see more coverage.
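The added/removed/changed classification is just a comparison of two keyed snapshots. A sketch of that logic, assuming each snapshot is a map of entity ID to properties (an illustrative shape, not the console's internal one):

```javascript
// Classify entities between two snapshots: added (T2 only),
// removed (T1 only), changed (in both, with different properties).
function diffSnapshots(t1, t2) {
  const added = [], removed = [], changed = [];
  for (const id of Object.keys(t2)) {
    if (!(id in t1)) added.push(id);
    else if (JSON.stringify(t1[id]) !== JSON.stringify(t2[id])) changed.push(id);
  }
  for (const id of Object.keys(t1)) {
    if (!(id in t2)) removed.push(id);
  }
  return { added, removed, changed };
}

const d = diffSnapshots(
  { n1: { age: 30 }, n2: { age: 25 } },   // T1
  { n1: { age: 31 }, n3: { age: 40 } }    // T2
);
// d = { added: ['n3'], removed: ['n2'], changed: ['n1'] }
```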
Vector Similarity Clusters
Visualize semantic neighborhoods by coloring nodes based on their vector similarity to a reference node.
Prerequisites
The node's label must have a vector index. Create one first if needed:
CREATE VECTOR INDEX my_index FOR (n:Document) ON n.embedding
How to use
- Right-click a node → select "Show Similar Nodes"
- The system detects vector indexes for the node's label, then finds the 20 most similar nodes
- Results are overlaid on the current graph view:
| Reference node | Purple diamond shape |
| High similarity | Red/orange (score near 1.0) |
| Medium similarity | Yellow (score ~0.5) |
| Low similarity | Blue (score near 0.0) |
| Non-similar | Dimmed to 15% opacity |
Hover over a colored node to see its similarity score as a tooltip.
A gradient legend bar appears at the bottom-right. Click "Clear Similarity" to restore normal view.
Backend
-- Uses db.similar (structural embeddings)
CALL db.similar('nodeId', 20) YIELD node, score

-- Falls back to db.vectorSearch if available
CALL db.vectorSearch('indexName', embedding, 20) YIELD node, score
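Cosine similarity is the usual score behind vector search; whether this engine uses cosine is an assumption, as are the exact color thresholds below. A sketch of the score plus the red/yellow/blue bucketing the overlay describes:

```javascript
// Cosine similarity between two embedding vectors (assumed metric).
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Bucket a score into the overlay's legend colors.
// Thresholds are illustrative, not the console's actual cutoffs.
function scoreColor(score) {
  if (score >= 0.75) return 'red';    // high similarity (near 1.0)
  if (score >= 0.4) return 'yellow';  // medium (~0.5)
  return 'blue';                      // low (near 0.0)
}

scoreColor(cosine([1, 0], [1, 0])); // 'red' (identical vectors, score 1)
```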