# vhost-device-vsock

## Design

This crate provides a vhost-device-vsock device that enables communication between an
application running in the guest, i.e. inside a VM, and an application running on the
host, i.e. outside the VM. The application running in the guest communicates over VM
sockets, i.e. over AF_VSOCK sockets. The application running on the host connects to a
Unix socket on the host, i.e. communicates over AF_UNIX sockets. The main components of
the crate are split into various files as described below:
11
12- [packet.rs](src/packet.rs)
13  - Introduces the **VsockPacket** structure that represents a single vsock packet
14  processing methods.
15- [rxops.rs](src/rxops.rs)
16  - Introduces various vsock operations that are enqueued into the rxqueue to be sent to the
17  guest. Exposes a **RxOps** structure.
18- [rxqueue.rs](src/rxqueue.rs)
19  - rxqueue contains the pending rx operations corresponding to that connection. The queue is
20  represented as a bitmap as we handle connection-oriented connections. The module contains
21  various queue manipulation methods. Exposes a **RxQueue** structure.
22- [thread_backend.rs](src/thread_backend.rs)
23  - Multiplexes connections between host and guest and calls into per connection methods that
24  are responsible for processing data and packets corresponding to the connection. Exposes a
25  **VsockThreadBackend** structure.
26- [txbuf.rs](src/txbuf.rs)
27  - Module to buffer data that is sent from the guest to the host. The module exposes a **LocalTxBuf**
28  structure.
29- [vhost_user_vsock_thread.rs](src/vhost_user_vsock_thread.rs)
30  - Module exposes a **VhostUserVsockThread** structure. It also handles new host initiated
31  connections and provides interfaces for registering host connections with the epoll fd. Also
32  provides interfaces for iterating through the rx and tx queues.
33- [vsock_conn.rs](src/vsock_conn.rs)
34  - Module introduces a **VsockConnection** structure that represents a single vsock connection
35  between the guest and the host. It also processes packets according to their type.
36- [vhu_vsock.rs](src/vhu_vsock.rs)
37  - exposes the main vhost user vsock backend interface.
38
## Usage

Run the vhost-device-vsock device:
```
vhost-device-vsock --guest-cid=<CID assigned to the guest> \
  --socket=<path to the Unix socket to be created to communicate with the VMM via the vhost-user protocol> \
  --uds-path=<path to the Unix socket to communicate with the guest via the virtio-vsock device> \
  [--tx-buffer-size=<size of the buffer used for the TX virtqueue (guest->host packets)>] \
  [--groups=<list of group names to which the device belongs concatenated with '+' delimiter>]
```
or
```
vhost-device-vsock --vm guest-cid=<CID assigned to the guest>,socket=<path to the Unix socket to be created to communicate with the VMM via the vhost-user protocol>,uds-path=<path to the Unix socket to communicate with the guest via the virtio-vsock device>[,tx-buffer-size=<size of the buffer used for the TX virtqueue (guest->host packets)>][,groups=<list of group names to which the device belongs concatenated with '+' delimiter>]
```

Specify the `--vm` argument multiple times to configure multiple devices, like this:
```
vhost-device-vsock \
--vm guest-cid=3,socket=/tmp/vhost3.socket,uds-path=/tmp/vm3.vsock,groups=group1+groupA \
--vm guest-cid=4,socket=/tmp/vhost4.socket,uds-path=/tmp/vm4.vsock,tx-buffer-size=32768
```

Or use a configuration file:
```
vhost-device-vsock --config=<path to the local yaml configuration file>
```

Configuration file example:
```yaml
vms:
    - guest_cid: 3
      socket: /tmp/vhost3.socket
      uds_path: /tmp/vm3.sock
      tx_buffer_size: 65536
      groups: group1+groupA
    - guest_cid: 4
      socket: /tmp/vhost4.socket
      uds_path: /tmp/vm4.sock
      tx_buffer_size: 32768
      groups: group2+groupB
```

Run VMM (e.g. QEMU):

```
qemu-system-x86_64 \
  <normal QEMU options> \
  -object memory-backend-file,share=on,id=mem0,size=<Guest RAM size>,mem-path=<Guest RAM file path> \ # size == -m size
  -machine <machine options>,memory-backend=mem0 \
  -chardev socket,id=char0,reconnect=0,path=<vhost-user socket path> \
  -device vhost-user-vsock-pci,chardev=char0
```

## Working example

```sh
shell1$ vhost-device-vsock --vm guest-cid=4,uds-path=/tmp/vm4.vsock,socket=/tmp/vhost4.socket
```
or, if you want to configure the TX buffer size:
```sh
shell1$ vhost-device-vsock --vm guest-cid=4,uds-path=/tmp/vm4.vsock,socket=/tmp/vhost4.socket,tx-buffer-size=65536
```

```sh
shell2$ qemu-system-x86_64 \
          -drive file=vm.qcow2,format=qcow2,if=virtio -smp 2 -m 512M -mem-prealloc \
          -object memory-backend-file,share=on,id=mem0,size=512M,mem-path="/dev/hugepages" \
          -machine q35,accel=kvm,memory-backend=mem0 \
          -chardev socket,id=char0,reconnect=0,path=/tmp/vhost4.socket \
          -device vhost-user-vsock-pci,chardev=char0
```

### Guest listening

#### iperf

```sh
# https://github.com/stefano-garzarella/iperf-vsock
guest$ iperf3 --vsock -s
host$  iperf3 --vsock -c /tmp/vm4.vsock
```

#### netcat

```sh
guest$ nc --vsock -l 1234

host$  nc -U /tmp/vm4.vsock
CONNECT 1234
```
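
The `CONNECT 1234` line typed into `nc` above is the handshake that vhost-device-vsock expects on every host-initiated connection: the host application connects to the `uds-path` Unix socket, sends `CONNECT <port>`, and waits for an `OK <host port>` acknowledgement before exchanging data (the Firecracker-style hybrid vsock protocol). The following is a minimal Python sketch of the client side of that handshake; since it cannot assume a running device, the device end is mocked with an ordinary Unix socket server, and the paths and the reply port are illustrative assumptions:

```python
import os
import socket
import tempfile
import threading

# Illustrative stand-ins; the real values come from --uds-path and the
# port the guest application is listening on.
UDS_PATH = os.path.join(tempfile.mkdtemp(), "vm4.vsock")
GUEST_PORT = 1234

def mock_device(listener: socket.socket) -> None:
    """Mock of the device side: accept one host connection, read the
    CONNECT request line, and acknowledge it."""
    conn, _ = listener.accept()
    request = conn.makefile("rb").readline()   # b"CONNECT 1234\n"
    assert request.startswith(b"CONNECT")
    conn.sendall(b"OK 1073741824\n")           # illustrative host-side port
    conn.close()

# Stand-in for a running vhost-device-vsock listening on uds-path.
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(UDS_PATH)
listener.listen(1)
threading.Thread(target=mock_device, args=(listener,), daemon=True).start()

# Host application side, equivalent to `nc -U /tmp/vm4.vsock` followed by
# typing `CONNECT 1234`.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(UDS_PATH)
client.sendall(f"CONNECT {GUEST_PORT}\n".encode())
reply = client.makefile("rb").readline()
print(reply.decode().strip())  # -> OK 1073741824
```

Once the `OK` line is received, the connection behaves as a plain byte stream to the guest application listening on that vsock port.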

### Host listening

#### iperf

```sh
# https://github.com/stefano-garzarella/iperf-vsock
host$  iperf3 --vsock -s -B /tmp/vm4.vsock
guest$ iperf3 --vsock -c 2
```

#### netcat

```sh
host$ nc -l -U /tmp/vm4.vsock_1234

guest$ nc --vsock 2 1234
```
148### Sibling VM communication
149
150If you add multiple VMs with their devices configured with at least one common group name, they can communicate with
151each other. If you don't explicitly specify a group name, a default group will be assigned to the device with name
152`default`, and all such devices will be able to communicate with each other. Or you can choose a different list of
153group names for each device, and only devices with the at least one group in commmon will be able to communicate with
154each other.
155
156For example, if you have two VMs with CID 3 and 4, you can run the following commands to make them communicate:
157
158```sh
159shell1$ vhost-device-vsock --vm guest-cid=3,uds-path=/tmp/vm3.vsock,socket=/tmp/vhost3.socket,groups=group1+group2 \
160          --vm guest-cid=4,uds-path=/tmp/vm4.vsock,socket=/tmp/vhost4.socket,groups=group1
161shell2$ qemu-system-x86_64 \
162          -drive file=vm1.qcow2,format=qcow2,if=virtio -smp 2 -m 512M -mem-prealloc \
163          -object memory-backend-file,share=on,id=mem0,size=512M,mem-path="/dev/hugepages" \
164          -machine q35,accel=kvm,memory-backend=mem0 \
165          -chardev socket,id=char0,reconnect=0,path=/tmp/vhost3.socket \
166          -device vhost-user-vsock-pci,chardev=char0
167shell3$ qemu-system-x86_64 \
168          -drive file=vm2.qcow2,format=qcow2,if=virtio -smp 2 -m 512M -mem-prealloc \
169          -object memory-backend-file,share=on,id=mem0,size=512M,mem-path="/dev/hugepages2" \
170          -machine q35,accel=kvm,memory-backend=mem0 \
171          -chardev socket,id=char0,reconnect=0,path=/tmp/vhost4.socket \
172          -device vhost-user-vsock-pci,chardev=char0
173```
174
175Please note that here the `groups` parameter is specified just for clarity, but it is not necessary to specify it if you want
176to use the default group and make all the devices communicate with one another. It is useful to specify a list of groups
177when you want fine-grained control over which devices can communicate with each other.
178
179```sh
180# nc-vsock patched to set `.svm_flags = VMADDR_FLAG_TO_HOST`
181guest_cid3$ nc-vsock -l 1234
182guest_cid4$ nc-vsock 3 1234
183```

## License

This project is licensed under either of

- [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0
- [BSD-3-Clause License](https://opensource.org/licenses/BSD-3-Clause)
191