Revision | 1ef275296147c4435582ce90ab4e20f942f5c135 (tree) |
---|---|
Time | 2022-09-10 21:46:26 |
Author | Albert Mietus < albert AT mietus DOT nl > |
Committer | Albert Mietus < albert AT mietus DOT nl > |
AsIs
@@ -8,7 +8,7 @@ | ||
8 | 8 | |
9 | 9 | .. post:: |
10 | 10 | :category: Castle DesignStudy |
11 | - :tags: Castle, Concurrency, DRAFT | |
11 | + :tags: Castle, Concurrency, DRAFT§ | |
12 | 12 | |
13 | 13 | Sooner than we may realize, even embedded systems will have many, many cores, as I described in
14 | 14 | “:ref:`BusyCores`”. Castle should make it easy to write code for all of them: not to keep them busy, but to maximize |
@@ -23,7 +23,7 @@ | ||
23 | 23 | efficiently. The exact syntax will come later. |
24 | 24 | |
25 | 25 | Basic terminology |
26 | -***************** | |
26 | +================= | |
27 | 27 | |
28 | 28 | There are many theories available, and some more practical expertise, but they hardly share a common vocabulary.
29 | 29 | For that reason, let’s describe some basic terms that will be used in these blogs. As always, we use Wikipedia as common
@@ -35,7 +35,7 @@ | ||
35 | 35 | .. include:: CCC-sidebar-concurrency.irst |
36 | 36 | |
37 | 37 | Concurrent |
38 | -========== | |
38 | +---------- | |
39 | 39 | |
40 | 40 | Concurrency_ is the **ability** to “compute” multiple *tasks* at the same time. |
41 | 41 | |BR| |
@@ -56,7 +56,7 @@ | ||
56 | 56 | |
57 | 57 | |
58 | 58 | Parallelism |
59 | -=========== | |
59 | +----------- | |
60 | 60 | |
61 | 61 | Parallelism_ is about executing multiple tasks (seemingly) at the same time. We will focus on running many
62 | 62 | concurrent tasks (of the same program) on *“as many cores as possible”*. When we assume a thousand cores, we need a |
@@ -71,16 +71,17 @@ | ||
71 | 71 | |
72 | 72 | |
73 | 73 | Distributed |
74 | ------------ | |
74 | +~~~~~~~~~~~ | |
75 | 75 | |
76 | 76 | A special form of parallelism is Distributed-Computing_: computing on many computers. Many experts consider this |
77 | 77 | an independent field of expertise. Still --as Multi-Core_ is basically “many computers on a chip”-- it’s an |
78 | 78 | available, adjacent [#DistributedDiff]_ theory, and we should use it to design our “best ever language”.
79 | 79 | |
80 | + | |
80 | 81 | .. include:: CCC-sidebar-CS.irst |
81 | 82 | |
82 | -Efficient Communication | |
83 | -*********************** | |
83 | +Communicating Efficiently |
84 | +========================= | |
84 | 85 | |
85 | 86 | When multiple tasks run concurrently, they have to communicate to pass data and to control progress. Unlike in a
86 | 87 | sequential program --where the control is trivial, as is sharing data-- this needs a bit of extra effort.
@@ -94,7 +95,7 @@ | ||
94 | 95 | |
95 | 96 | |
96 | 97 | Shared Memory |
97 | -============= | |
98 | +------------- | |
98 | 99 | |
99 | 100 | In this model, all tasks (usually threads or processes) have some shared/common memory; typically “variables”. As the access
100 | 101 | is asynchronous, the risk exists that the data is updated “at the same time” by two or more tasks. This can lead to invalid
@@ -110,7 +111,7 @@ | ||
110 | 111 | |
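+As an illustration --a minimal sketch in Go, since Castle’s syntax is still to be designed-- here is that risk in
+code: two tasks update one shared counter, and only the mutex around the critical section keeps the result valid.
+
+.. code-block:: go
+
+   // Shared-memory sketch: two tasks increment one shared variable.
+   // Without the mutex, the read-modify-write steps interleave and
+   // updates get lost; the final count is then usually below 200000.
+   package main
+
+   import (
+       "fmt"
+       "sync"
+   )
+
+   func main() {
+       var counter int
+       var mu sync.Mutex // guards counter: the critical-section lock
+       var wg sync.WaitGroup
+
+       for t := 0; t < 2; t++ {
+           wg.Add(1)
+           go func() { // a concurrent task sharing `counter`
+               defer wg.Done()
+               for i := 0; i < 100000; i++ {
+                   mu.Lock()
+                   counter++ // the protected update
+                   mu.Unlock()
+               }
+           }()
+       }
+       wg.Wait()
+       fmt.Println(counter) // 200000 -- only because every update was locked
+   }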
111 | 112 | |
112 | 113 | Messages |
113 | -======== | |
114 | +-------- | |
114 | 115 | |
115 | 116 | A more modern approach is Message-Passing_: a task sends some information to another; this can be a message, some data, |
116 | 117 | or an event. In all cases, there is a distinct sender and receiver --and apparently no common/shared memory-- so no
@@ -133,16 +134,19 @@ | ||
133 | 134 | |BR| |
134 | 135 | Notice: As the compiler will insert the (low level) Semaphores_, the risk that a developer forgets one is gone! |
135 | 136 | |
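+To contrast this with the shared-memory sketch above, a minimal Go sketch (again illustration only, not Castle
+syntax) where two tasks communicate purely by messages, so there is no shared variable left to protect:
+
+.. code-block:: go
+
+   // Message-passing sketch: the tasks share no variables, so there is
+   // no semaphore or mutex that a developer could forget.
+   package main
+
+   import "fmt"
+
+   func main() {
+       results := make(chan int) // the only link between the two tasks
+
+       go func() { // sender task: it owns its data and only sends copies
+           for i := 1; i <= 3; i++ {
+               results <- i * i
+           }
+           close(results) // signals "no more messages"
+       }()
+
+       for r := range results { // receiver task: reads until the close
+           fmt.Println("received:", r)
+       }
+   }
+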
137 | +.. _MPA: | |
136 | 138 | |
137 | 139 | Messaging Aspects |
138 | ------------------ | |
140 | +================= | |
139 | 141 | |
140 | 142 | There are many variants on messaging, mostly combinations of a few fundamental aspects. Let’s mention some basic ones.
143 | +|BR| In :ref:`MPA-examples` some existing message-passing systems are classified in those terms, for those who |
144 | +prefer a more practical characterisation. |
141 | 145 | |
142 | 146 | .. include:: CCC-sidebar-async.irst |
143 | 147 | |
144 | 148 | (A)Synchronous |
145 | -~~~~~~~~~~~~~~ | |
149 | +-------------- | |
146 | 150 | |
147 | 151 | **Synchronous** messages resemble normal function-calls. Typically a “question” is sent, the call awaits the
148 | 152 | answer-message, and that answer is returned. This can be seen as a layer on top of the more fundamental send/receive
@@ -159,7 +163,7 @@ | ||
159 | 163 | |
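+That layering is easy to sketch in Go (illustrative only; the ``request``/``ask`` names are invented for this
+example): the question carries a private reply-channel, and a small wrapper sends, then awaits the answer.
+
+.. code-block:: go
+
+   // A synchronous "call" built from two asynchronous messages.
+   package main
+
+   import "fmt"
+
+   type request struct {
+       question string
+       reply    chan string // private channel for the answer-message
+   }
+
+   func server(in chan request) {
+       for req := range in {
+           req.reply <- "answer to " + req.question // send the answer back
+       }
+   }
+
+   // ask hides the send/receive pair: to the caller it is a function-call.
+   func ask(in chan request, q string) string {
+       r := request{question: q, reply: make(chan string)}
+       in <- r          // send the question ...
+       return <-r.reply // ... and await the answer-message
+   }
+
+   func main() {
+       in := make(chan request)
+       go server(in)
+       fmt.Println(ask(in, "6*7"))
+   }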
160 | 164 | |
161 | 165 | (Un)Buffered |
162 | -~~~~~~~~~~~~ | |
166 | +------------ | |
163 | 167 | |
164 | 168 | Despite it not truly being a characteristic of the message itself, messages can be *buffered*, or not. It is about
165 | 169 | piping, transporting the message: can this “connection” (see below) *contain/save/store* messages? When there is no |
@@ -171,7 +175,7 @@ | ||
171 | 175 | Note: this is always asymmetric; messages need to be sent before they can be read.
172 | 176 | |
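+In Go terms (a sketch, not Castle): an unbuffered channel cannot store a message at all, while a buffered one
+lets a few messages be “in flight” between sender and receiver.
+
+.. code-block:: go
+
+   // Buffered versus unbuffered message passing.
+   package main
+
+   import "fmt"
+
+   func main() {
+       buffered := make(chan string, 2) // this "connection" stores up to 2 messages
+       buffered <- "first"              // returns at once: kept in the buffer
+       buffered <- "second"             // idem; a third send would block
+
+       unbuffered := make(chan string)    // no storage at all
+       go func() { unbuffered <- "hi" }() // blocks until somebody reads
+
+       fmt.Println(<-buffered, <-buffered, <-unbuffered)
+   }
+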
173 | 177 | Connected Channels (or not) |
174 | -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
178 | +--------------------------- | |
175 | 179 | |
176 | 180 | Messages can be sent over (pre-) *connected channels* or to freely addressable end-points. Some people use the term “connection
177 | 181 | oriented” for those connected-channels, others use the term “channel” more generically, for any medium that is
@@ -190,9 +194,8 @@ | ||
190 | 194 | number of channels). |
191 | 195 | |
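+A Go-flavoured sketch of the difference (the end-point names are invented for the example): a channel wires two
+fixed ends together, whereas a name-to-channel registry mimics freely addressable end-points.
+
+.. code-block:: go
+
+   // Connected channel versus (faked) addressable end-points.
+   package main
+
+   import "fmt"
+
+   func main() {
+       // Connected: sender and receiver were wired together up front.
+       wire := make(chan string)
+       go func() { wire <- "over the fixed connection" }()
+       fmt.Println(<-wire)
+
+       // Addressable: any task can look up an end-point by name and send to it.
+       endpoints := map[string]chan string{
+           "logger":  make(chan string, 1),
+           "printer": make(chan string, 1),
+       }
+       endpoints["logger"] <- "to whoever owns the 'logger' address"
+       fmt.Println(<-endpoints["logger"])
+   }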
192 | 196 | |
193 | - | |
194 | 197 | (Non-) Blocking |
195 | -~~~~~~~~~~~~~~~ | |
198 | +--------------- | |
196 | 199 | |
197 | 200 | Both the writer and the reader can be *blocking* (or not); this is a facet of the function-call. A blocking reader
198 | 201 | will always return when a message is available -- and will pause until then.
@@ -205,16 +208,15 @@ | ||
205 | 208 | as well. |
206 | 209 | |
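+Go, for instance, shows both facets on one channel (a sketch only): a plain receive blocks, while a ``select``
+with a ``default`` branch makes the very same read non-blocking.
+
+.. code-block:: go
+
+   // Blocking and non-blocking reads of the same channel.
+   package main
+
+   import "fmt"
+
+   func main() {
+       ch := make(chan int, 1)
+
+       select { // non-blocking read: returns immediately, message or not
+       case v := <-ch:
+           fmt.Println("got", v)
+       default:
+           fmt.Println("no message yet") // taken: the channel is still empty
+       }
+
+       ch <- 42          // does not block, as the buffer has room
+       fmt.Println(<-ch) // blocking read: pauses until a message is available
+   }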
207 | 210 | |
208 | - | |
209 | 211 | Uni/Bi-Directional, Broadcast |
210 | -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
212 | +----------------------------- | |
211 | 213 | |
212 | 214 | Messages --or actually the channel [#channelDir]_ that transports them-- can be *unidirectional*: from sender to receiver only;
213 | 215 | *bidirectional*: both sides can send and receive; or *broadcast*: one message is sent to many receivers [#anycast]_.
214 | 216 | |
215 | 217 | |
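+Go channels, as an example, are point-to-point, so a *broadcast* has to be sketched as a fan-out: one send per
+subscribed receiver (the helper below is invented for this example).
+
+.. code-block:: go
+
+   // Broadcast faked by fanning one message out to every subscriber.
+   package main
+
+   import "fmt"
+
+   func broadcast(msg string, subscribers []chan string) {
+       for _, sub := range subscribers {
+           sub <- msg // one send per receiver: each one gets its own copy
+       }
+   }
+
+   func main() {
+       subs := []chan string{make(chan string, 1), make(chan string, 1)}
+       broadcast("hello, all", subs)
+       for i, sub := range subs {
+           fmt.Println("receiver", i, "got:", <-sub)
+       }
+   }
+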
216 | 218 | Reliability & Order |
217 | -~~~~~~~~~~~~~~~~~~~ | |
219 | +------------------- | |
218 | 220 | |
219 | 221 | Especially when studying “network messages”, we have to consider Reliability_ too. Many developers assume that a sent
220 | 222 | message is always received, and that when multiple messages are sent, they are received in the same order. In most
@@ -271,37 +273,13 @@ | ||
271 | 273 | Then, a *faster* conversation with a bit of noise is commonly preferred.
272 | 274 | |
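+The everyday instance of this trade-off is TCP versus UDP. A minimal Go sketch (the localhost addresses are
+purely illustrative): the TCP stream is reliable and ordered; the UDP datagrams are fast, but may be lost,
+duplicated, or reordered.
+
+.. code-block:: go
+
+   // Reliable-and-ordered (TCP) versus fast-but-noisy (UDP) messaging.
+   package main
+
+   import (
+       "fmt"
+       "net"
+   )
+
+   func main() {
+       if reliable, err := net.Dial("tcp", "localhost:7000"); err == nil {
+           fmt.Fprintln(reliable, "arrives complete and in order, or errors out")
+           reliable.Close()
+       }
+       if fast, err := net.Dial("udp", "localhost:7001"); err == nil {
+           fmt.Fprintln(fast, "may be dropped or reordered; nobody will tell us")
+           fast.Close()
+       }
+   }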
273 | 275 | |
274 | -Some examples | |
275 | -------------- | |
276 | - | |
277 | -In the section below, we mention a few, everyday message-passing systems, to shed light on the theoretical features. | |
276 | +------------------------ | |
278 | 277 | |
279 | -Pipes | |
280 | -~~~~~ | |
281 | - | |
282 | -The famous *Unix Pipes* are unidirectional, reliable, blocking, asynchronous, buffered, non-networking **data-only** | |
283 | -messages. The (“stdout”) output of one process is fed as input to (one) other process. It’s data only, in one direction | |
284 | --- but the controll can in two directions: when the second (receiving) process can’t process the data (and the buffers | |
285 | -becoming full), the first process can be slowed down (although this a not well know feature). | |
286 | - | |
287 | -It’s also an example of a quite implicit channel: the programmer (of both programs) have nothing (to little) to do | |
288 | -extra, to make it possible. | |
278 | +.. todo:: All below is draft and needs work!!!! | |
289 | 279 | |
290 | 280 | |
291 | - | |
292 | ------------------------- | |
293 | - | |
294 | -.. todo:: | |
295 | - | |
296 | - | |
297 | - * Pipe : kind of data messages | |
298 | - | |
299 | - | |
300 | - .. todo:: All below is draft and needs work!!!! | |
301 | - | |
302 | - | |
303 | -Models | |
304 | -****** | |
281 | +Process calculus | |
282 | +================ | |
305 | 283 | |
306 | 284 | Probably the oldest model to describe concurrency is the
307 | 285 | (all tokens move at the same timeslot) -- which is hard to implement (efficiently) on Multi-Core_.
@@ -379,3 +357,4 @@ | ||
379 | 357 | .. _RPC: https://en.wikipedia.org/wiki/Remote_procedure_call |
380 | 358 | .. _Broadcasting: https://en.wikipedia.org/wiki/Broadcasting_(networking) |
381 | 359 | .. _Reliability: https://en.wikipedia.org/wiki/Reliability_(computer_networking) |
360 | +.. _Process-Calculus: https://en.wikipedia.org/wiki/Process_calculus |
@@ -0,0 +1,27 @@ | ||
1 | +.. _MPA-examples: | |
2 | + | |
3 | +Everyday Message Passing examples (ToDo) | |
4 | +======================================== | |
5 | + | |
6 | +In :ref:`ConcurrentComputingConcepts` we have catalogued some :ref:`MPA` quite briefly. As a kind of addendum, we show a few well-known message-passing systems, to shed some light on those theoretical features in that article. |
7 | + | |
8 | +Pipes | |
9 | +----- |
10 | + | |
11 | +The famous *Unix Pipes* are unidirectional, reliable, blocking, asynchronous, buffered, non-networking **data-only** |
12 | +messages. The (“stdout”) output of one process is fed as input to (one) other process. It’s data only, in one direction |
13 | +-- but the control can go in two directions: when the second (receiving) process can’t process the data (and the buffers |
14 | +become full), the first process can be slowed down (although this is not a well-known feature). |
15 | + | |
16 | +It’s also an example of a quite implicit channel: the programmers (of both programs) have nothing (or little) extra |
17 | +to do to make it possible. |
18 | + | |
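+A sketch of such a pipe, set up explicitly in Go (the equivalent of the shell’s ``ls | wc -l``):
+
+.. code-block:: go
+
+   // Unix pipe: stdout of `ls` feeds, one-way and kernel-buffered, into `wc`.
+   package main
+
+   import (
+       "os"
+       "os/exec"
+   )
+
+   func main() {
+       ls := exec.Command("ls")
+       wc := exec.Command("wc", "-l")
+
+       pipe, err := ls.StdoutPipe() // the unidirectional, buffered channel
+       if err != nil {
+           panic(err)
+       }
+       wc.Stdin = pipe
+       wc.Stdout = os.Stdout
+
+       wc.Start() // start the reader first ...
+       ls.Run()   // ... then the writer; a full pipe buffer slows `ls` down
+       wc.Wait()  // that back-pressure is the implicit reverse control flow
+   }
+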
19 | +DDS | |
20 | +--- |
21 | + | |
22 | ||
23 | +====== | |
24 | + | |
25 | +(BSD) Sockets | |
26 | +------------- |
27 | + |