.\" Copyright (C) 2020 Jens Axboe .\" Copyright (C) 2020 Red Hat, Inc. .\" .\" SPDX-License-Identifier: LGPL-2.0-or-later .\" .TH io_uring_queue_init 3 "July 10, 2020" "liburing-0.7" "liburing Manual" .SH NAME io_uring_queue_init \- setup io_uring submission and completion queues .SH SYNOPSIS .nf .B #include .PP .BI "int io_uring_queue_init(unsigned " entries "," .BI " struct io_uring *" ring "," .BI " unsigned " flags ");" .PP .BI "int io_uring_queue_init_params(unsigned " entries "," .BI " struct io_uring *" ring "," .BI " struct io_uring_params *" params ");" .PP .BI "int io_uring_queue_init_mem(unsigned " entries "," .BI " struct io_uring *" ring "," .BI " struct io_uring_params *" params "," .BI " void *" buf ", size_t " buf_size ");" .fi .SH DESCRIPTION .PP The .BR io_uring_queue_init (3) function executes the .BR io_uring_setup (2) system call to initialize the submission and completion queues in the kernel with at least .I entries entries in the submission queue and then maps the resulting file descriptor to memory shared between the application and the kernel. By default, the CQ ring will have twice the number of entries as specified by .I entries for the SQ ring. This is adequate for regular file or storage workloads, but may be too small for networked workloads. The SQ ring entries do not impose a limit on the number of in-flight requests that the ring can support, it merely limits the number that can be submitted to the kernel in one go (batch). if the CQ ring overflows, e.g. more entries are generated than fits in the ring before the application can reap them, then if the kernel supports .B IORING_FEAT_NODROP the ring enters a CQ ring overflow state. Otherwise it drops the CQEs and increments .I cq.koverflow in .I stuct io_uring with the number of CQEs dropped. The overflow state is indicated by .B IORING_SQ_CQ_OVERFLOW being set in the SQ ring flags. Unless the kernel runs out of available memory, entries are not dropped, but it is a much slower completion path and will slow down request processing. For that reason it should be avoided and the CQ ring sized appropriately for the workload. Setting .I cq_entries in .I struct io_uring_params will tell the kernel to allocate this many entries for the CQ ring, independent of the SQ ring size in given in .IR entries . If the value isn't a power of 2, it will be rounded up to the nearest power of 2. On success, .BR io_uring_queue_init (3) returns 0 and .I ring will point to the shared memory containing the io_uring queues. On failure .BR -errno is returned. .I flags will be passed through to the io_uring_setup syscall (see .BR io_uring_setup (2)). The .BR io_uring_queue_init_params (3) and .BR io_uring_queue_init_mem (3) variants will pass the parameters indicated by .I params straight through to the .BR io_uring_setup (2) system call. The .BR io_uring_queue_init_mem (3) variant uses the provided .I buf with associated size .I buf_size as the memory for the ring, using the .B IORING_SETUP_NO_MMAP flag to .BR io_uring_setup (2). The buffer passed to .BR io_uring_queue_init_mem (3) must already be zeroed. Typically, the caller should allocate a huge page and pass that in to .BR io_uring_queue_init_mem (3). Pages allocated by mmap are already zeroed. .BR io_uring_queue_init_mem (3) returns the number of bytes used from the provided buffer, so that the app can reuse the buffer with the returned offset to put more rings in the same huge page. 
.PP
On success, the resources held by
.I ring
should be released via a corresponding call to
.BR io_uring_queue_exit (3).
.SH RETURN VALUE
.BR io_uring_queue_init (3)
and
.BR io_uring_queue_init_params (3)
return 0 on success and
.BR -errno
on failure.
.PP
.BR io_uring_queue_init_mem (3)
returns the number of bytes used from the provided buffer on success, and
.BR -errno
on failure.
.SH SEE ALSO
.BR io_uring_setup (2),
.BR io_uring_register_ring_fd (3),
.BR mmap (2),
.BR io_uring_queue_exit (3)