Commit MetaInfo

Revision: bae8bb6bc698192bbb3eca5810776a9c86192817 (tree)
Time: 2022-07-12 20:11:43
Author: Albert Mietus < albert AT mietus DOT nl >
Committer: Albert Mietus < albert AT mietus DOT nl >

Log Message

Started with (8) BusyCores-concepts (analyse)

Change Summary

Diff

diff -r 8e890803ce08 -r bae8bb6bc698 CCastle/2.Analyse/8.BusyCores-concepts.rst
--- a/CCastle/2.Analyse/8.BusyCores-concepts.rst Fri Jul 08 13:15:17 2022 +0200
+++ b/CCastle/2.Analyse/8.BusyCores-concepts.rst Tue Jul 12 13:11:43 2022 +0200
@@ -2,9 +2,9 @@
 
 .. _BC-concepts:
 
-=======================
-Concepts for Busy Cores
-=======================
+=============================
+Concurrent Computing Concepts
+=============================
 
 .. post::
    :category: Castle DesignStudy
@@ -20,61 +20,92 @@
 As Castle is a new language we have the opportunity to select such a concept and incorporate it into the language ...
 
 In this blog, we explore a bit of theory. I will focus on semantics and the possibilities to implement them
-efficiently. The syntactic details come later.
+efficiently. The exact syntax will come later.
 
+Basic terminology
+=================
+
+As there is much theory available and even more practical expertise, but only a limited set of “common words”, let us describe
+some basic terms. As always, we use Wikipedia as common ground, and add links for a deep-dive.
+
+.. include:: BusyCores-sidebar-concurrency.irst
 
 
 
 TODO
-****
+************************************************************
 
-.. todo:: All below is draft and needs work
+.. todo:: All below is draft and needs work!!!!
 
 
-----------
-
-CUT & PAST
-
-----------
-
-Some concepts
-=============
-
-Before we dive into the needs for Castle, lets define --shortly-- the available, theoretical concepts. Routinely, we add
-wikipedia links for a deep-dive.
-
-.. include:: BusyCores-sidebar-concurrency.irst
 
 Concurrency
 -----------
-Concurrency_ is the ability to “compute” multiple things at the same time, instead of doing them one after the other. It requires another mindset, but isn’t that complicated.
-A typical example is a loop: suppose we have a sequence of numbers and we like to compute the square of each one. Most developers will loop over those numbers, get one number, calculate the square, store it in another list, and continue with the next element. It works, but we have also instructed the computer to do it in sequence — especially when the task is bit more complicated, the compiler does know whether the ‘next task’ depends on the current one, and can’t optimise it.
 
-A better plan is to tell the compiler about the tasks; most are independently: square a number. There is also one that has to be run at the end: combine the results into a new list. And one is bit funny: distribute the sequence-elements over the “square-tasks” — clearly, one has to start with this one, but it can be concurrent with many others too.
+Concurrency_ is the ability to “compute” multiple things at the same time.
+|BR|
+Designing concurrent software isn’t that complicated, but it demands another mindset than when we write software that does
+one thing after the other.
+
+A typical example is a loop: suppose we have a sequence of numbers and we would like to compute the square of each one. Most
+developers will loop over those numbers, get one number, calculate the square, store it in another list, and continue.
+It works, but we have also instructed the computer to do it in sequence — especially when the
+task is a bit more complicated, the compiler does not know whether the ‘next task’ depends on the current one, and can’t
+optimise it.
+
+A better plan is to tell the compiler about the tasks; most are independent: square a number. There is also one that
+has to be run at the end: combine the results into a new list. And one is a bit funny: distribute the sequence-elements
+over the “square-tasks” — clearly, one has to start with this one, but it can be concurrent with many others too.
+|BR|
+This is *not* a parallel algorithm. By not specifying the order, we allow parallel execution. We do not demand it;
+sequential execution is allowed too.
+
 
 
 Parallelisme
 ------------
-Parallelisme_ is about executing multiple tasks (apparently) at the same time. We will focus running multiple
-concurrent task (of the same program) on as many cores as possible. And when we assume we have a thousand cores we need
-(at least) a thousand independent tasks — at any moment— to gain maximal speed up. This is not trivial!
+
+Parallelisme_ is about executing multiple tasks (apparently) at the same time. We will focus on running multiple concurrent
+tasks (of the same program) on “as many cores as possible”. When we assume a thousand cores, we need a thousand
+independent tasks (at least) to gain maximal speed up. A thousand at any moment!
 |BR|
-It’s not only about doing a thousand things at the same time (that is not to complicated, for a computer), but also — probably: mostly — about finishing a thousand times faster…
+It’s not only about doing a thousand things at the same time (that is not too complicated, for a computer), but also —
+probably: mostly — about finishing a thousand times faster…
 
 With many cores, multiple program-steps can be executed at the same time: from changing the same variable, acces the
 same memory, or compete for new memory. And when solving that, we introduce new hazards: like deadlocks_ and even
 livelocks_.
 
-Locking
-
-
 
 Distributed
 -----------
+
 A special form of parallelisme is Distributed-Computing_: compute on many computers. Many experts consider this
 as an independent field of expertise; still --as Multi-Core_ is basically “many computers on a chips”-- its there is an
-analogue [#DistributedDiff]_, and we should the know-how that is available there to design out “best ever language”.
+analogy [#DistributedDiff]_, and we should use the know-how that is available, to design our “best ever language”.
 
+Messages & shared-data
+----------------------
+
+Communication between two (concurrent) tasks (or processes, CPUs, computers) needs the passing of data (in one or two
+directions). Roughly, there are two ways to do so:
+
+Shared-Data
+ Memory (variables) that can be written and/or read by both. As the acces is typical not acces, a bit of
+
+
+Control
+========
+
+Models
+======
+
+Probably the oldest model to describe concurrency is the Petri-net_; which has the disadvantage that it is synchronous
+(all tokens move at the same timeslot) -- which is hard to implement (efficiently) on Multi-Core_.
+
+Actors
+
+A very introduce
 
 --------
 
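The squaring example in the hunk above can be made concrete with a small sketch. As Castle's syntax is deliberately left open in this post, the sketch below uses Python's ``concurrent.futures`` as a stand-in: each “square” is an independent task, distributing the elements comes first, combining the results comes last, and the executor is free (but not obliged) to run the squares in parallel.

.. code-block:: python

   from concurrent.futures import ProcessPoolExecutor

   def square(n: int) -> int:
       """One independent task: square a single number."""
       return n * n

   def squares(numbers: list[int]) -> list[int]:
       """Distribute the elements, square them concurrently, combine the results.

       The order of the individual square-tasks is deliberately unspecified:
       the pool may run them in parallel, yet a sequential schedule is valid too.
       """
       with ProcessPoolExecutor() as pool:
           return list(pool.map(square, numbers))

   if __name__ == "__main__":
       print(squares([1, 2, 3, 4, 5]))   # [1, 4, 9, 16, 25]

Note that ``pool.map`` already plays the “combine” role here: it collects the results in input order, regardless of the order in which the square-tasks actually ran.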
@@ -104,3 +135,8 @@
 .. _livelocks: https://en.wikipedia.org/wiki/Deadlock#Livelock
 .. _Critical-Sections: https://en.wikipedia.org/wiki/Critical_section
 .. _Distributed-Computing: https://en.wikipedia.org/wiki/Distributed_computing
+
+.. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model
+.. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory
+
+.. _Petri-Net: https://en.wikipedia.org/wiki/Petri_net
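The “Messages & shared-data” split, and the Actor-Model_ links added above, can be illustrated with a similarly rough sketch (again in Python, not Castle): one counter is shared and therefore every access needs a lock, while the actor-like worker owns its own total and is reachable only through messages in its mailbox.

.. code-block:: python

   import queue
   import threading

   # Shared-data style: both threads may touch the same variable,
   # so every access is guarded by a lock (a critical section).
   counter = 0
   lock = threading.Lock()

   def add_shared(amount: int) -> None:
       global counter
       with lock:
           counter += amount

   # Message-passing (actor-like) style: only the worker owns its total;
   # other tasks communicate by putting messages into its mailbox.
   mailbox = queue.Queue()

   def adder_actor() -> None:
       total = 0
       while True:
           msg = mailbox.get()
           if msg is None:          # 'None' is used here as the stop signal
               break
           total += msg
       print("actor total:", total)

   if __name__ == "__main__":
       worker = threading.Thread(target=adder_actor)
       worker.start()
       for i in range(5):
           add_shared(i)            # shared variable, guarded access
           mailbox.put(i)           # message to the actor, nothing shared
       mailbox.put(None)
       worker.join()
       print("shared counter:", counter)

Both styles communicate the same data; the difference is who is allowed to touch it, which is exactly the trade-off the new “Messages & shared-data” section starts to explore.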