Spring Boot PostgreSQL Optimization: Beyond Default JPA
The defaults are perfectly adequate for getting started. This guide holds them to a higher standard.
The Defaults Are Polite, Not Optimal
Spring Boot ships with defaults that prioritize getting you running — and they do that job well. HikariCP's connection pool, Hibernate's query generation, JPA's fetch strategies all work out of the box. But "works" and "works well at scale" are separated by configuration, not code rewrites.
The gap shows up in predictable places: the connection pool is sized for a single developer, not production traffic. Hibernate issues one INSERT per save() call instead of batching. JPA fetches every column even when you need two. Lazy loading works until serialization triggers a cascade of N+1 queries.
This guide covers the optimizations that close that gap: connection pool sizing, query layer tuning, batch operations, projection optimization, fetch strategy, pagination, and monitoring. Each section introduces the concept, provides the essential configuration, and links to a deeper guide where one exists.
The examples use Spring Boot 3.x with Spring Data JPA and the PostgreSQL JDBC driver (pgjdbc). Most configurations also apply to Spring Boot 2.x with minor syntax differences.
HikariCP Pool Sizing — Fewer Connections, More Throughput
HikariCP is Spring Boot's default connection pool, and its default maximumPoolSize of 10 is closer to optimal than you might expect. The counterintuitive reality of connection pooling is that fewer connections often deliver higher throughput.
The foundational formula, from the PostgreSQL wiki:
connections = (core_count * 2) + effective_spindle_count

For a 4-core server with SSD storage (effective spindle count of 1), that gives (4 * 2) + 1 = 9 connections. A pool of 10 is in the right neighborhood. A pool of 50, on the other hand, will cause context switching, lock contention, and cache thrashing on the PostgreSQL server — all of which reduce throughput rather than increase it.
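The formula is simple enough to sanity-check in code. A throwaway sketch, where the class and method names are mine and not part of any HikariCP API:

```java
// Sketch of the PostgreSQL wiki sizing formula.
// Illustrative names only; this is not a HikariCP API.
public class PoolSizing {

    // connections = (core_count * 2) + effective_spindle_count
    public static int recommendedPoolSize(int coreCount, int effectiveSpindleCount) {
        return (coreCount * 2) + effectiveSpindleCount;
    }

    public static void main(String[] args) {
        // 4-core server, SSD storage (effective spindle count of 1)
        System.out.println(recommendedPoolSize(4, 1)); // prints 9
    }
}
```

Nine connections for four cores; HikariCP's default of 10 lands almost exactly on the formula.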
Configure the pool in application.properties or application.yml:
# application.yml
spring:
datasource:
hikari:
maximum-pool-size: 10
minimum-idle: 10 # Fixed pool — no idle connection churn
connection-timeout: 30000 # 30s — max wait for a connection from the pool
idle-timeout: 600000 # 10m — irrelevant when minimum-idle = maximum-pool-size
max-lifetime: 1800000 # 30m — connections recycled before PostgreSQL's timeout
leak-detection-threshold: 30000 # 30s — log a warning if a connection is held this long

Fixed pool vs dynamic pool. When minimumIdle equals maximumPoolSize, the pool is fixed — HikariCP never creates or destroys connections during normal operation. This eliminates the overhead of connection churn and is almost always the correct choice. A dynamic pool (where minimumIdle < maximumPoolSize) only makes sense if your application has extreme traffic variance and you need to release connections during quiet periods.
leakDetectionThreshold is, in my experience, the single most useful setting that most teams never enable. Set it to a value longer than your longest expected transaction (30 seconds is a safe starting point). If any connection is checked out from the pool for longer than this threshold, HikariCP logs a warning with the stack trace of the code that checked it out. This catches connection leaks before they cause pool exhaustion in production.
Monitoring the pool. Expose HikariCP metrics through Spring Boot Actuator and Micrometer:
management:
endpoints:
web:
exposure:
include: health,metrics,prometheus
metrics:
tags:
application: ${spring.application.name}

Key metrics to watch:
- hikaricp.connections.active — connections currently in use
- hikaricp.connections.idle — connections available in the pool
- hikaricp.connections.pending — threads waiting for a connection
- hikaricp.connections.timeout — connection acquisition timeouts (should be zero)
For a deep dive on pool math, see HikariCP Pool Sizing for PostgreSQL. For general connection pooling concepts, see Connection Pooling.
Connection Leak Prevention
A connection leak occurs when application code checks out a connection from the pool and never returns it. Over time, leaked connections accumulate until the pool is exhausted and all new requests block on connectionTimeout, eventually timing out. It is a quiet problem until it is not.
The pattern in production looks like this:
- hikaricp.connections.active climbs steadily and never drops
- hikaricp.connections.pending spikes
- Request latency increases across all endpoints
- Requests start failing with SQLTransientConnectionException: Connection is not available, request timed out
A few common patterns deserve your attention:
@Async methods that access the database. The connection is checked out on the async thread but may not be returned if the method throws an exception:
// Dangerous: if this throws, the connection may leak
@Async
public void processOrder(Long orderId) {
Order order = orderRepository.findById(orderId).orElseThrow();
// ... processing that might throw
}

Manual DataSource.getConnection() without a try-with-resources:
// Leak if an exception occurs before conn.close()
Connection conn = dataSource.getConnection();
PreparedStatement ps = conn.prepareStatement("SELECT ...");
// ... if this throws, conn is never closed
// Safe: try-with-resources guarantees cleanup
try (Connection conn = dataSource.getConnection();
PreparedStatement ps = conn.prepareStatement("SELECT ...")) {
// ...
}

Missing @Transactional on methods that perform multiple repository calls. Without a transaction boundary, each repository call checks out and returns its own connection and runs in its own transaction — multiplying pool traffic and leaving no single rollback boundary when an error occurs partway through.
For the full leak prevention guide, see Spring @Async Connection Leak with PostgreSQL.
JPQL vs Native Queries — When the ORM Steps Aside
JPA's query language (JPQL) maps to SQL through Hibernate's query parser. For standard CRUD operations and entity-mapped results, JPQL is sufficient and portable. But PostgreSQL offers capabilities that JPQL simply cannot express — and it is worth knowing where that boundary lies.
When JPQL is sufficient:
@Query("SELECT p FROM Post p WHERE p.published = true ORDER BY p.createdAt DESC")
List<Post> findPublishedPosts();
@Query("SELECT p FROM Post p JOIN p.author a WHERE a.verified = true")
List<Post> findPostsByVerifiedAuthors();

When native queries win:
// Window functions — not available in JPQL
@Query(value = """
SELECT id, title, view_count,
RANK() OVER (PARTITION BY category_id ORDER BY view_count DESC) AS category_rank
FROM posts
WHERE published = true
""", nativeQuery = true)
List<Object[]> findPostsWithCategoryRank();

// CTEs — recursive queries for tree structures
@Query(value = """
WITH RECURSIVE category_tree AS (
SELECT id, name, parent_id, 0 AS depth
FROM categories
WHERE parent_id IS NULL
UNION ALL
SELECT c.id, c.name, c.parent_id, ct.depth + 1
FROM categories c
JOIN category_tree ct ON c.parent_id = ct.id
)
SELECT * FROM category_tree ORDER BY depth, name
""", nativeQuery = true)
List<Object[]> findCategoryTree();

// PostgreSQL-specific aggregation
@Query(value = """
SELECT author_id,
jsonb_agg(jsonb_build_object('id', id, 'title', title) ORDER BY created_at DESC) AS posts
FROM posts
WHERE published = true
GROUP BY author_id
""", nativeQuery = true)
List<Object[]> findPostsGroupedByAuthor();

The JPQL cache. Hibernate parses and validates JPQL queries at startup, caching the parsed representation. Native queries skip this step — they are passed directly to the JDBC driver. For applications with thousands of JPQL queries, the startup parse time and plan cache memory can be notable.
I should be forthcoming about the trade-off with native queries: they bypass Hibernate's entity mapping validation. A column rename in a migration will cause a JPQL query to fail at startup (a compile-time-equivalent error), but a native query will fail at runtime when a user reaches the endpoint. This trade-off — expressiveness at the cost of compile-time safety — means native queries deserve thorough test coverage.
Batch Inserts — From 1,000 Round Trips to 1
By default, JPA issues one INSERT statement per save() call. Saving 1,000 entities means 1,000 round trips to PostgreSQL. Network latency alone — even 0.5ms per trip — adds 500ms to the operation. This is an area where a small configuration change yields a very large improvement.
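The arithmetic is worth making concrete. A back-of-the-envelope sketch, with names of my own invention:

```java
// Back-of-the-envelope round-trip math for the numbers above.
// Illustrative names only; not a real API.
public class RoundTripMath {

    // Pure network cost in ms: one round trip per statement.
    public static double networkCostMs(int statements, double latencyMsPerTrip) {
        return statements * latencyMsPerTrip;
    }

    // Statements actually sent when batching: ceil(entities / batchSize).
    public static int statementsWithBatching(int entities, int batchSize) {
        return (entities + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        System.out.println(networkCostMs(1000, 0.5));         // prints 500.0
        System.out.println(statementsWithBatching(1000, 50)); // prints 20
    }
}
```

Batching 1,000 inserts at a batch size of 50 cuts the trip count to 20 — the 500ms of pure latency shrinks to 10ms before any server-side savings are counted.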
Enabling Batch Inserts
spring:
jpa:
properties:
hibernate:
jdbc:
batch_size: 50
order_inserts: true
order_updates: true

batch_size tells Hibernate to group INSERT statements into batches of 50. order_inserts and order_updates sort statements by entity type so that inserts for the same table are grouped together — without this, interleaved entity types break batching.
The Identity Generation Problem
The most common reason batch inserts silently fail to batch — and I emphasize silently — is the ID generation strategy:
// This BREAKS batching entirely
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;

IDENTITY uses PostgreSQL's SERIAL/BIGSERIAL column, which requires Hibernate to execute each INSERT individually and immediately read back the generated ID via RETURNING id (or a separate SELECT). Hibernate cannot batch inserts when it needs the ID of each row before inserting the next.
The remedy is to use SEQUENCE:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "post_seq")
@SequenceGenerator(name = "post_seq", sequenceName = "post_id_seq", allocationSize = 50)
private Long id;

With SEQUENCE and an allocationSize of 50, Hibernate fetches 50 IDs at once from the sequence (SELECT nextval('post_id_seq')) and assigns them in memory. This allows it to batch 50 INSERTs together without any intermediate round trips.
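The mechanics can be pictured as a block allocator: one sequence round trip hands back a block of IDs, and the rest are dealt from memory. A simplified sketch — Hibernate's actual pooled optimizer differs in detail, and the names here are illustrative:

```java
import java.util.function.LongSupplier;

// Simplified sketch of block-based ID allocation. Not Hibernate's
// implementation; illustrative names only.
public class IdBlockAllocator {

    private final LongSupplier nextval;  // one call = one SELECT nextval(...) round trip
    private final int allocationSize;
    private long next;
    private long remaining;
    private int roundTrips;

    public IdBlockAllocator(LongSupplier nextval, int allocationSize) {
        this.nextval = nextval;
        this.allocationSize = allocationSize;
    }

    public long nextId() {
        if (remaining == 0) {            // block exhausted: one database round trip
            next = nextval.getAsLong();
            remaining = allocationSize;
            roundTrips++;
        }
        remaining--;
        return next++;
    }

    public int roundTrips() {
        return roundTrips;
    }
}
```

Handing out 100 IDs with an allocation size of 50 costs two sequence calls; with IDENTITY, every single row is a round trip.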
Verifying Batching Is Active
Enable Hibernate statistics to confirm batching:
spring:
jpa:
properties:
hibernate:
generate_statistics: true

Look for log output like:
Session Metrics {
23416 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
142357 nanoseconds spent preparing 20 JDBC statements;
3254891 nanoseconds spent executing 20 JDBC statements;
1523456 nanoseconds spent executing 20 JDBC batches;
}

The key indicator: executing N JDBC batches should appear, and the number of JDBC statements should be considerably smaller than the number of entities saved. You can also verify on the PostgreSQL side using pg_stat_statements — look for a single parameterized INSERT with a high calls count rather than many individual INSERTs.
When to Bypass JPA Entirely
For very large bulk operations — tens of thousands or millions of rows — even batched JPA inserts are slower than necessary. JPA still instantiates entities, tracks dirty state, and processes lifecycle callbacks. Sometimes the most helpful thing an ORM can do is step aside.
JdbcTemplate.batchUpdate() skips entity management:
@Autowired
private JdbcTemplate jdbcTemplate;
public void bulkInsertEvents(List<Event> events) {
jdbcTemplate.batchUpdate(
"INSERT INTO events (type, payload, created_at) VALUES (?, ?::jsonb, ?)",
new BatchPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
Event e = events.get(i);
ps.setString(1, e.getType());
ps.setString(2, e.getPayload());
ps.setTimestamp(3, Timestamp.valueOf(e.getCreatedAt()));
}
@Override
public int getBatchSize() {
return events.size();
}
}
);
}

For maximum throughput on truly large loads, PostgreSQL's COPY protocol streams data directly into the table without per-row SQL parsing. See Spring Batch with PostgreSQL COPY Performance for implementation details.
For the full guide on identity vs sequence and batch insert configuration, see Hibernate Batch Insert: Identity vs Sequence.
EntityGraph and the Cartesian Product Trap
@EntityGraph provides declarative fetch plans — a cleaner alternative to writing JOIN FETCH in JPQL for every query variation. It is well worth understanding, along with its one significant pitfall.
Named vs Ad-Hoc EntityGraphs
Named EntityGraph (defined on the entity):
@Entity
@NamedEntityGraph(
name = "Post.withAuthor",
attributeNodes = @NamedAttributeNode("author")
)
public class Post {
@ManyToOne(fetch = FetchType.LAZY)
private Author author;
@OneToMany(mappedBy = "post", fetch = FetchType.LAZY)
private List<Comment> comments;
}

public interface PostRepository extends JpaRepository<Post, Long> {
@EntityGraph("Post.withAuthor")
List<Post> findByPublishedTrue();
}

Ad-hoc EntityGraph (defined at the repository method level):
@EntityGraph(attributePaths = {"author"})
List<Post> findByPublishedTrue();

Both generate a LEFT JOIN FETCH for the author association, loading it eagerly in the same query instead of lazily loading each author individually. Without the @EntityGraph, calling post.getAuthor().getName() during serialization triggers a separate SELECT for each post — the classic N+1 pattern.
The Cartesian Product Problem
The pitfall surfaces when you fetch multiple collections:
// DO NOT do this
@EntityGraph(attributePaths = {"comments", "tags"})
List<Post> findByPublishedTrue(); If a post has 10 comments and 5 tags, the resulting SQL JOIN produces 10 * 5 = 50 rows for that single post. Across 100 posts, the result set grows to thousands of rows with considerable duplication. Hibernate deduplicates in memory, but the data transfer from PostgreSQL and the memory consumption are far more than the situation warrants.
Fix: use @EntityGraph for at most one collection. For multiple collections, use @BatchSize:
@Entity
public class Post {
@ManyToOne(fetch = FetchType.LAZY)
private Author author;
@OneToMany(mappedBy = "post", fetch = FetchType.LAZY)
@BatchSize(size = 25)
private List<Comment> comments;
@ManyToMany(fetch = FetchType.LAZY)
@BatchSize(size = 25)
private Set<Tag> tags;
}

With @BatchSize(size = 25), when Hibernate initializes the comments collection for one post, it proactively initializes the comments for up to 24 other posts in the same query using an IN clause: SELECT * FROM comments WHERE post_id IN (?, ?, ?, ...). This converts an N+1 problem into an N/25 + 1 problem — a meaningful improvement without the cartesian product.
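The grouping @BatchSize performs can be sketched directly — pending owner IDs are partitioned into chunks, one IN-clause query per chunk. This is an illustration of the arithmetic, not Hibernate's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of batched collection loading: one IN-clause query per chunk.
// Illustrative only; not Hibernate's code.
public class BatchLoading {

    public static <T> List<List<T>> partition(List<T> ids, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += size) {
            chunks.add(ids.subList(i, Math.min(i + size, ids.size())));
        }
        return chunks;
    }

    // Collection-load queries for n owners: ceil(n / size) instead of n.
    public static int queriesNeeded(int n, int size) {
        return (n + size - 1) / size;
    }
}
```

For 100 posts and a batch size of 25, that is 4 collection queries on top of the original posts query — five statements where naive lazy loading would issue 101.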
For a deeper analysis, see Spring EntityGraph Cartesian Product.
Projection Optimization — Select Only What You Need
When you load a JPA entity, Hibernate does more than execute SELECT *. It hydrates the entity into the first-level cache (persistence context), sets up dirty checking so it can detect changes at flush time, and creates proxy objects for lazy associations. For read-only operations — list views, API responses, reports — this overhead is doing work that produces no benefit.
Interface Projections
Spring Data supports interface-based projections that return only the requested columns:
public interface PostSummary {
Long getId();
String getTitle();
LocalDateTime getCreatedAt();
String getAuthorName(); // Derived from JOIN — Spring resolves nested paths
}

public interface PostRepository extends JpaRepository<Post, Long> {
List<PostSummary> findByPublishedTrue();
} Spring Data generates a query that selects only the columns needed by the interface methods. No entity hydration, no dirty checking, no persistence context entry. For a posts table with 20 columns including large TEXT and JSONB fields, this can reduce both query time and memory usage considerably.
Class-Based Projections (DTOs)
For more control, use a DTO class with a constructor:
public record PostSummaryDto(Long id, String title, LocalDateTime createdAt) {}

@Query("SELECT new com.example.PostSummaryDto(p.id, p.title, p.createdAt) FROM Post p WHERE p.published = true")
List<PostSummaryDto> findPublishedSummaries();

The SELECT new syntax tells Hibernate to construct the DTO directly from the query result without creating an entity instance.
Native Query Projections
For native queries, interface projections work with column name mapping:
public interface RevenueByMonth {
String getMonth();
Long getOrderCount();
BigDecimal getRevenue();
}
@Query(value = """
SELECT to_char(created_at, 'YYYY-MM') AS month,
COUNT(*) AS orderCount,
SUM(total) AS revenue
FROM orders
WHERE created_at >= :since
GROUP BY 1 ORDER BY 1
""", nativeQuery = true)
List<RevenueByMonth> findRevenueByMonth(@Param("since") LocalDate since);

Column aliases in the native query map to getter methods on the interface (case-insensitive, camelCase resolution).
For a detailed comparison of entity vs projection performance, see Spring Interface Projection vs SELECT *.
N+1 Queries — The Fetch Join Playbook
The N+1 problem in Spring Boot manifests when JPA lazy-loads associations during serialization or iteration. Load 50 posts, serialize them to JSON, and each post's author field triggers a separate SELECT — 51 queries where 1 or 2 would have sufficed.
The Standard Fixes
JOIN FETCH in JPQL:
@Query("SELECT p FROM Post p JOIN FETCH p.author WHERE p.published = true")
List<Post> findPublishedWithAuthor(); This generates a single query with a JOIN, loading both posts and their authors.
@EntityGraph (as covered above):
@EntityGraph(attributePaths = {"author"})
List<Post> findByPublishedTrue();

@BatchSize for lazy loading in batches rather than one at a time:
@ManyToOne(fetch = FetchType.LAZY)
@BatchSize(size = 25)
private Author author;

Disable Open Session in View
Spring Boot enables Open Session in View (OSIV) by default. OSIV keeps the Hibernate session (and a database connection) open for the entire HTTP request, including view rendering and JSON serialization. This means lazy loading "just works" during rendering — convenient in the short term, but it masks N+1 queries and holds database connections open far longer than necessary.
spring:
jpa:
open-in-view: false # Disable OSIV

With OSIV disabled, accessing a lazy association outside a @Transactional method throws LazyInitializationException. This encourages you to eagerly load everything you need within the service layer — which is exactly where fetch optimization belongs. The additional benefit: connections are returned to the pool as soon as the transaction commits, not when the HTTP response finishes rendering.
For the full OSIV analysis, see Spring Open-in-View and Pool Exhaustion.
Detecting N+1 Queries
Hibernate statistics:
spring:
jpa:
properties:
hibernate:
generate_statistics: true
logging:
level:
org.hibernate.stat: DEBUG After each transaction, Hibernate logs the number of statements executed, entities loaded, and collections initialized. A controller action that loads 50 posts and initializes 50 author collections will show 50 collections initialized — a clear N+1 signal, and one that rewards your attention.
datasource-proxy wraps the DataSource and logs every query with timing:
@Bean
public DataSource dataSource(DataSourceProperties properties) {
HikariDataSource dataSource = properties.initializeDataSourceBuilder()
.type(HikariDataSource.class).build();
return ProxyDataSourceBuilder.create(dataSource)
.name("query-log")
.logQueryBySlf4j(SLF4JLogLevel.INFO)
.countQuery()
.build();
}

pg_stat_statements on the PostgreSQL server is the definitive source of truth. Look for queries with disproportionately high calls counts — if a query to the authors table shows a calls count equal to the posts query's count multiplied by your typical page size, that is an N+1.
For the general N+1 guide, see N+1 Queries: Identification and Prevention.
Specification Queries and Plan Cache
Spring Data JpaSpecificationExecutor enables dynamic queries built at runtime:
public interface OrderRepository extends JpaRepository<Order, Long>,
JpaSpecificationExecutor<Order> {}

public static Specification<Order> hasStatus(String status) {
return (root, query, cb) -> cb.equal(root.get("status"), status);
}
public static Specification<Order> createdAfter(LocalDate date) {
return (root, query, cb) -> cb.greaterThan(root.get("createdAt"), date.atStartOfDay());
}
// Dynamic composition
List<Order> results = orderRepository.findAll(
hasStatus("pending").and(createdAfter(thirtyDaysAgo))
);

The Plan Cache Problem
Specifications that use IN clauses introduce a plan cache concern that is worth understanding. Each unique IN list length generates a distinct SQL string, and Hibernate caches the parsed plan for each. With variable-length IN lists, the cache fills with thousands of near-identical entries, consuming heap memory and potentially causing OutOfMemoryError.
The remedy — IN clause parameter padding:
spring:
jpa:
properties:
hibernate:
query:
in_clause_parameter_padding: true

With padding enabled, Hibernate rounds the IN list length up to the next power of two. A 3-element list becomes 4 (with the last element repeated). A 5-element list becomes 8. This reduces the number of distinct query plans from potentially thousands to a handful.
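The padding rule itself is a one-liner. A sketch of the rounding — illustrative, not Hibernate's code:

```java
// Sketch of IN-clause parameter padding: the bind-parameter count is
// rounded up to the next power of two. Illustrative, not Hibernate's code.
public class InClausePadding {

    public static int paddedSize(int actualSize) {
        int padded = 1;
        while (padded < actualSize) {
            padded *= 2;
        }
        return padded;
    }
}
```

Lists of length 1 through 1,000 collapse into just ten distinct SQL shapes (1, 2, 4, ..., 1024), which is why the plan cache stops growing.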
JDBC-level prepared statement threshold:
spring:
datasource:
url: jdbc:postgresql://localhost:5432/mydb?prepareThreshold=5

prepareThreshold=5 means the driver executes a query without a named server-side prepared statement for its first few executions, switching over on the fifth. This avoids filling PostgreSQL's prepared statement cache with one-off queries while still benefiting from prepared statements for repeated ones.
For the full analysis, see Spring Specification IN Clause Cache Explosion and pgjdbc prepareThreshold and Generic Plans.
Pagination — Without the Memory Surprise
Spring Data pagination with Pageable works well until you combine it with JOIN FETCH:
@Query("SELECT p FROM Post p JOIN FETCH p.comments WHERE p.published = true")
Page<Post> findPublishedWithComments(Pageable pageable); Hibernate logs a warning:
HHH000104: firstResult/maxResults specified with collection fetch; applying in memory! What happens: Hibernate fetches all matching posts with their comments — the full un-paginated result set — transfers everything to the JVM, and applies LIMIT/OFFSET in memory. For a table with 100,000 published posts, this loads all 100,000 posts and all their comments into memory to return a page of 20. I'm afraid that is not a sustainable arrangement.
The Two-Query Fix
Separate the ID selection from the entity loading:
// Step 1: Get paginated IDs (lightweight query, no JOIN)
@Query("SELECT p.id FROM Post p WHERE p.published = true ORDER BY p.createdAt DESC")
Page<Long> findPublishedIds(Pageable pageable);
// Step 2: Fetch full entities for those IDs (with eager loading)
@Query("SELECT p FROM Post p JOIN FETCH p.comments WHERE p.id IN :ids")
List<Post> findWithCommentsByIds(@Param("ids") List<Long> ids);

// Service layer
Page<Long> idPage = postRepository.findPublishedIds(pageable);
List<Post> posts = postRepository.findWithCommentsByIds(idPage.getContent());
// Wrap in PageImpl if needed for the response

The first query uses PostgreSQL's LIMIT/OFFSET (or keyset pagination) to retrieve a small set of IDs. The second query loads only those specific entities with their associations. No in-memory pagination. Two queries, each doing exactly what is asked of it.
For very large datasets where offset-based pagination degrades, switch to keyset pagination. Instead of WHERE ... OFFSET 10000 LIMIT 20, keyset pagination uses WHERE created_at < :last_seen_value ORDER BY created_at DESC LIMIT 20 — consistent performance regardless of how deep into the result set you are.
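The keyset predicate is easy to demonstrate against in-memory data. A sketch with a hypothetical Post record, mirroring WHERE created_at < :lastSeen ORDER BY created_at DESC LIMIT :limit:

```java
import java.time.LocalDateTime;
import java.util.Comparator;
import java.util.List;

// In-memory sketch of keyset pagination. The Post record is hypothetical;
// in SQL this is: WHERE created_at < :lastSeen ORDER BY created_at DESC LIMIT :limit
public class KeysetPage {

    public record Post(long id, LocalDateTime createdAt) {}

    public static List<Post> nextPage(List<Post> all, LocalDateTime lastSeen, int limit) {
        return all.stream()
                .filter(p -> p.createdAt().isBefore(lastSeen))        // seek past the last-seen key
                .sorted(Comparator.comparing(Post::createdAt).reversed())
                .limit(limit)
                .toList();
    }
}
```

Each page's last createdAt value becomes the next request's lastSeen, so the database never scans and discards skipped rows the way OFFSET does. (Ties on the sort key need a tie-breaker column, typically the ID, in real queries.)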
See Keyset Pagination and Spring Pagination JOIN FETCH Memory.
Monitoring What Matters
Optimization without monitoring is guesswork. Allow me to walk through the tools that provide genuine visibility.
Hibernate Statistics
spring:
jpa:
properties:
hibernate:
generate_statistics: true
logging:
level:
org.hibernate.stat: DEBUG
org.hibernate.SQL: DEBUG
org.hibernate.orm.jdbc.bind: TRACE # Log bound parameter values (Hibernate 6; on Hibernate 5 use org.hibernate.type.descriptor.sql.BasicBinder)

Hibernate statistics report per-session metrics: statements prepared, statements executed, entities loaded, collections fetched, second-level cache hits/misses, and flush count. These are essential during development and load testing. I would recommend disabling generate_statistics in production — the overhead of collecting statistics on every query is non-trivial.
Actuator and Micrometer
Spring Boot Actuator exposes HikariCP metrics through Micrometer automatically. The most important metrics for a Grafana dashboard:
| Metric | What it tells you |
|---|---|
| hikaricp.connections.active | Connections currently serving queries |
| hikaricp.connections.idle | Connections available in the pool |
| hikaricp.connections.pending | Threads waiting for a connection (should be 0) |
| hikaricp.connections.timeout | Pool exhaustion events (should be 0) |
| hikaricp.connections.usage (timer) | How long connections are checked out |
Alert on pending > 0 sustained for more than a few seconds, and on any timeout events.
pg_stat_statements
The server-side source of truth — and, in my estimation, the single most valuable monitoring tool for PostgreSQL in production. Query the top statements by total execution time:
SELECT query,
calls,
total_exec_time::numeric(10,2) AS total_ms,
mean_exec_time::numeric(10,2) AS mean_ms,
rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20; This reveals the queries consuming the most database time — your highest-impact optimization targets. A query with a 2ms mean but 500,000 calls per day contributes 1,000 seconds of database time — far more impactful than a 500ms query called 100 times. The numbers tell the story clearly.
For query plan analysis, see EXPLAIN ANALYZE Guide.
The Checklist
If you will permit me, a summary of the key settings covered in this guide, ordered by impact.
| Setting | Default | Recommended | Why |
|---|---|---|---|
| spring.jpa.open-in-view | true | false | Releases connections at transaction end, not request end; forces proper fetch planning |
| hibernate.jdbc.batch_size | 1 (no batching) | 25–50 | Batches INSERT/UPDATE statements, reducing round trips by 25-50x |
| @GeneratedValue strategy | Varies | SEQUENCE with allocationSize | IDENTITY breaks batching entirely |
| hibernate.order_inserts | false | true | Groups inserts by entity type so batching can work across mixed saves |
| hibernate.order_updates | false | true | Same benefit for updates |
| hikari.maximum-pool-size | 10 | (cores * 2) + spindles | Right-sized pool avoids contention on PostgreSQL |
| hikari.minimum-idle | 10 | Same as maximum-pool-size | Fixed pool eliminates connection churn |
| hikari.leak-detection-threshold | 0 (disabled) | 30000 (30s) | Catches connection leaks before pool exhaustion |
| hibernate.query.in_clause_parameter_padding | false | true | Reduces plan cache entries for IN clauses |
| pgjdbc prepareThreshold | 5 | 5 (default is fine) | Controls the switch from client-side to server-side prepared statements |
| hibernate.generate_statistics | false | true in dev, false in prod | Essential for detecting N+1 and verifying batching |