<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.1.1">Jekyll</generator><link href="http://0.0.0.0:4000/feed.xml" rel="self" type="application/atom+xml" /><link href="http://0.0.0.0:4000/" rel="alternate" type="text/html" /><updated>2020-09-25T23:06:22+00:00</updated><id>http://0.0.0.0:4000/feed.xml</id><title type="html">BLog</title><subtitle>Bin Liang's Tech Blog</subtitle><author><name>liangbin</name></author><entry xml:lang="en"><title type="html">Introduction To Zeta</title><link href="http://0.0.0.0:4000/portfolio/posts/introduction-to-zeta" rel="alternate" type="text/html" title="Introduction To Zeta" /><published>2020-09-25T00:00:00+00:00</published><updated>2020-09-25T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/Introduction-To-Zeta</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/introduction-to-zeta"><h2 id="looking-forward-to-see-the-birth-of-zeta">Looking forward to seeing the birth of Zeta</h2>
<p><a href="/portfolio/os_projects/zeta">A new Open Source Networking Service Gateway and Platform for Private DC and Public Cloud</a></p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Networked Compute and Storage" /><category term="Technology" /><category term="Solution" /><summary type="html">Looking forward to seeing the birth of Zeta A new Open Source Networking Service Gateway and Platform for Private DC and Public Cloud</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Gmail Aliasing</title><link href="http://0.0.0.0:4000/portfolio/posts/gmail-aliasing" rel="alternate" type="text/html" title="Gmail Aliasing" /><published>2020-08-28T00:00:00+00:00</published><updated>2020-08-28T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/Gmail-Aliasing</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/gmail-aliasing"><p>Email aliasing is a good solution for online privacy and email organization without using multiple email accounts.</p>
<p>A Yahoo Mail alias can be configured directly in settings, but a Gmail alias seems nowhere to be found.</p>
<p>Here is a hidden feature I found that realizes Gmail aliasing in an interesting way:</p>
<p><strong>Appending ANY suffix with “+” to your Gmail ID effectively creates an alias</strong></p>
<p>For example, assuming my Gmail address is [email protected], I can use the following Gmail addresses as aliases of my main account:</p>
<ul>
<li>[email protected]</li>
<li>[email protected]</li>
</ul>
<p>Try it and enjoy!</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Exploration" /><category term="Practice" /><summary type="html">Email aliasing is a good solution for online privacy and email organization without using multiple email accounts. A Yahoo Mail alias can be configured directly in settings, but a Gmail alias seems nowhere to be found. Here is a hidden feature I found that realizes Gmail aliasing in an interesting way: Appending ANY suffix with “+” to your Gmail ID effectively creates an alias For example, assuming my Gmail address is [email protected], I can use the following Gmail addresses as aliases of my main account: [email protected] [email protected] Try it and enjoy!</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Fugaku TofuD Interconnect Study</title><link href="http://0.0.0.0:4000/portfolio/posts/fugaku-tofud-interconnect-study" rel="alternate" type="text/html" title="Fugaku TofuD Interconnect Study" /><published>2020-06-25T00:00:00+00:00</published><updated>2020-06-25T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/Fugaku-TofuD-Interconnect-Study</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/fugaku-tofud-interconnect-study"><h2 id="fugaku-tofud-interconnect-study">Fugaku TofuD Interconnect Study</h2>
<p>Draft – Coming soon!</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Networked Compute and Storage" /><category term="Insight" /><category term="Technology" /><category term="Solution" /><summary type="html">Fugaku TofuD Interconnect Study Draft – Coming soon!</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Fugaku is the new King</title><link href="http://0.0.0.0:4000/portfolio/posts/fugaku-is-the-new-king" rel="alternate" type="text/html" title="Fugaku is the new King" /><published>2020-06-23T00:00:00+00:00</published><updated>2020-06-23T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/Fugaku-Is-The-New-King</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/fugaku-is-the-new-king"><h2 id="fugaku-is-the-new-king">Fugaku is the new King</h2>
<p>The biggest <strong><em>GOOD</em></strong> news in the IT industry this year must be the announcement of the Fugaku supercomputer in early June, taking the top spot on the Top500 list, a ranking of the world’s fastest supercomputers. It also swept the other rankings of supercomputer performance, taking first place on HPCG, a ranking of supercomputers running real-world applications; HPL-AI, which ranks supercomputers on their performance for tasks typically used in artificial intelligence applications; and Graph 500, which ranks systems on data-intensive loads. This is the first time in history that the same supercomputer has become No. 1 on Top500, HPCG, and Graph500 simultaneously.</p>
<p><a href="/assets/img/fugaku.jpeg"></a>
Why am I so excited about Fugaku, and why did I use this news as the opening post of a new HPC series?</p>
<p>The concept of Fugaku was proposed 10 years ago, and the project started around 2014. The concept is not brand new, but significant design and packaging improvements have been applied over its predecessors. These changes and improvements reflect some of the most heated debates in the HPC industry and can serve very well as testimony on these topics.</p>
<p>A few key differentiation factors of interest are:</p>
<ul>
<li>ARM v8 SVE instruction set vs Fujitsu’s traditional SPARC solution vs Intel x86</li>
<li>48 low Freq/low power cores per CPU vs fewer cores with higher Freq/power per CPU</li>
<li>HBM vs DDR4/GDDR6</li>
<li>6D Mesh/Torus interconnect topology vs Blue Gene’s 5D Torus vs Summit’s Fat-Tree</li>
<li>3D stacked memory vs integrated memory on chip</li>
</ul>
<p>Why am I, as a networking (in the macro sense) guy, even interested in HPC/supercomputers?</p>
<ol>
<li>Supercomputers/HPC are a technology driver for the computing industry, especially in packaging and interconnect.</li>
<li>Converged compute, storage and interconnect co-design is deep in the genes of supercomputing/HPC.</li>
<li>The boundary between HPC and non-HPC applications has blurred in recent years, especially with AI and IoT applications, making HPC solutions very relevant to general-purpose compute infrastructure like the DC and the edge.</li>
</ol>
<p>In the following posts in this series, I will dig deeper into the areas above and reflect that knowledge back into the more general NCS discussions. Stay tuned, friends!</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Networked Compute and Storage" /><category term="Insight" /><category term="Technology" /><category term="Solution" /><summary type="html">Fugaku is the new King The biggest GOOD news in the IT industry this year must be the announcement of the Fugaku supercomputer in early June, taking the top spot on the Top500 list, a ranking of the world’s fastest supercomputers. It also swept the other rankings of supercomputer performance, taking first place on HPCG, a ranking of supercomputers running real-world applications; HPL-AI, which ranks supercomputers on their performance for tasks typically used in artificial intelligence applications; and Graph 500, which ranks systems on data-intensive loads. This is the first time in history that the same supercomputer has become No. 1 on Top500, HPCG, and Graph500 simultaneously. Why am I so excited about Fugaku, and why did I use this news as the opening post of a new HPC series? The concept of Fugaku was proposed 10 years ago, and the project started around 2014. The concept is not brand new, but significant design and packaging improvements have been applied over its predecessors. These changes and improvements reflect some of the most heated debates in the HPC industry and can serve very well as testimony on these topics. A few key differentiation factors of interest are: ARM v8 SVE instruction set vs Fujitsu’s traditional SPARC solution vs Intel x86 48 low Freq/low power cores per CPU vs fewer cores with higher Freq/power per CPU HBM vs DDR4/GDDR6 6D Mesh/Torus interconnect topology vs Blue Gene’s 5D Torus vs Summit’s Fat-Tree 3D stacked memory vs integrated memory on chip Why am I, as a networking (in the macro sense) guy, even interested in HPC/supercomputers? Supercomputers/HPC are a technology driver for the computing industry, especially in packaging and interconnect. Converged compute, storage and interconnect co-design is deep in the genes of supercomputing/HPC. The boundary between HPC and non-HPC applications has blurred in recent years, especially with AI and IoT applications, making HPC solutions very relevant to general-purpose compute infrastructure like the DC and the edge. In the following posts in this series, I will dig deeper into the areas above and reflect that knowledge back into the more general NCS discussions. Stay tuned, friends!</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Reflection on Data Fidelity</title><link href="http://0.0.0.0:4000/portfolio/posts/reflection-on-data-fidelity" rel="alternate" type="text/html" title="Reflection on Data Fidelity" /><published>2020-06-20T00:00:00+00:00</published><updated>2020-06-20T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/Reflection-On-Data-Fidelity</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/reflection-on-data-fidelity"><blockquote>
<p>“To be, or not to be? That is the question”<br />
― <em>William Shakespeare, Hamlet</em></p>
</blockquote>
<h2 id="reflection-on-data-fidelity">Reflection on Data Fidelity</h2>
<p>Is data communication, or more specifically computer networking, science or art?
Is the world analog or digital?</p>
<p>Ever since computers started taking charge of our lives, everything has started the transformation of digitization, sooner or later. Digitization makes everything easier to quantify, measure, compute and transport, and it is truly the foundation of the information revolution.</p>
<p>After many years of engineering training and over 20 years of professional practice in the data communication industry, it has become second nature for me to think and view things in 0s and 1s, black or white, right or wrong. Everything becomes easier and more manageable this way, and you can really feel the power to change the world. As we immerse deeper and deeper into this digital world we helped create, I realized one day that something is not right: I started to get confused about truth vs lie, fact vs fiction, and, more importantly, purpose vs meaning.</p>
<p>Rethinking the purpose of data communication: is it really just sending data from A to B accurately, efficiently and in a timely manner?</p>
<p>I would say that’s mostly true (I will elaborate more later) among digital computers, but surely not totally true for human beings.</p>
<p>For human beings, digitization is never natural. Our brains don’t work like digital computers and our perception system is NOT binary. Our sensing system works continuously within its functional spectrum, our intelligence system takes infinite inputs, and each of us evolves into a unique, beautiful being. One of my hobbies is working on my HiFi system, and one of the lessons I learned from it is that while digital processing and amplification do create lots of fun and flexibility, they <strong>NEVER</strong> sound as natural as an analog system. 0s and 1s do reduce ambiguity within the player/receiver/amp, but that is not the fidelity we human beings want as listeners. From this example, we can clearly see that the digital data communication system we use today has <strong>three</strong> fundamental functions:</p>
<ol>
<li>A/D and D/A transformation at input and output to integrate the digital system into a naturally analog world</li>
<li>Digital processing, such as encapsulation, multiplexing, encoding etc</li>
<li>Recording or transportation</li>
</ol>
<p>Traditionally, the data communication industry focuses on functions 2 and 3 above and leaves function 1 to human-machine interface devices like computers and mobile devices. Taking the HiFi case as an example, function 1 decides the fidelity of the music we hear out of the system no matter how well we do in functions 2 and 3. As a matter of fact, regarding fidelity, the best we can do in functions 2 and 3 is to reduce the degradation to <strong>ZERO</strong>. Yeah, that is pathetic, but it is exactly the problem we have in this industry. For too long we have buried our heads in the sand building pipes; eventually we lost sight of the purpose of communication. We struggle to push toward the limit of Shannon’s theorem, building more and more, fatter and fatter data pipes, from the deep sea to outer space, but still we see no way to meet the mounting demands.</p>
<p>It’s time to drop the wrench and rethink hard about the purpose of communication and which fidelity we should be focusing on now. Here are a few questions worth asking ourselves:</p>
<ol>
<li>Garbage in, garbage out - should we care <strong>A LOT</strong> about what to communicate, aka <strong>the input</strong>, instead of just making sure we transport “garbage” in high fidelity?</li>
<li>Do we communicate the <strong>RIGHT</strong> information, no more and no less, over the communication system? Instead of trying to beat the Shannon limit, shouldn’t we change the battlefield and rethink what “information” we feed into the pipes as a better approach?</li>
<li>Can we make the <strong>RIGHT</strong> communication choices when the communication system is peripheralized from the rest of the information processing systems, namely compute and storage?</li>
</ol>
<p>Think of an interactive holographic telepresence scheme that transports only hundreds of Mbps of metadata instead of 100-1000s of Gbps of encoded 3D data: that is a 3-4 orders of magnitude reduction in both the bandwidth and latency demands on the communication system, making it a much more achievable goal.</p>
<p>This is a challenging time for networking, but it’s also a perfect time for breaking away: Welcome to an era of converged compute, storage and networking, friend!</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Networked Compute and Storage" /><category term="Insight" /><category term="Technology" /><category term="Solution" /><summary type="html">“To be, or not to be? That is the question” ― William Shakespeare, Hamlet Reflection on Data Fidelity Is data communication, or more specifically computer networking, science or art? Is the world analog or digital? Ever since computers started taking charge of our lives, everything has started the transformation of digitization, sooner or later. Digitization makes everything easier to quantify, measure, compute and transport, and it is truly the foundation of the information revolution. After many years of engineering training and over 20 years of professional practice in the data communication industry, it has become second nature for me to think and view things in 0s and 1s, black or white, right or wrong. Everything becomes easier and more manageable this way, and you can really feel the power to change the world. As we immerse deeper and deeper into this digital world we helped create, I realized one day that something is not right: I started to get confused about truth vs lie, fact vs fiction, and, more importantly, purpose vs meaning. Rethinking the purpose of data communication: is it really just sending data from A to B accurately, efficiently and in a timely manner? I would say that’s mostly true (I will elaborate more later) among digital computers, but surely not totally true for human beings. For human beings, digitization is never natural. Our brains don’t work like digital computers and our perception system is NOT binary. Our sensing system works continuously within its functional spectrum, our intelligence system takes infinite inputs, and each of us evolves into a unique, beautiful being. One of my hobbies is working on my HiFi system, and one of the lessons I learned from it is that while digital processing and amplification do create lots of fun and flexibility, they NEVER sound as natural as an analog system. 0s and 1s do reduce ambiguity within the player/receiver/amp, but that is not the fidelity we human beings want as listeners. From this example, we can clearly see that the digital data communication system we use today has three fundamental functions: A/D and D/A transformation at input and output to integrate the digital system into a naturally analog world Digital processing, such as encapsulation, multiplexing, encoding etc Recording or transportation Traditionally, the data communication industry focuses on functions 2 and 3 above and leaves function 1 to human-machine interface devices like computers and mobile devices. Taking the HiFi case as an example, function 1 decides the fidelity of the music we hear out of the system no matter how well we do in functions 2 and 3. As a matter of fact, regarding fidelity, the best we can do in functions 2 and 3 is to reduce the degradation to ZERO. Yeah, that is pathetic, but it is exactly the problem we have in this industry. For too long we have buried our heads in the sand building pipes; eventually we lost sight of the purpose of communication. We struggle to push toward the limit of Shannon’s theorem, building more and more, fatter and fatter data pipes, from the deep sea to outer space, but still we see no way to meet the mounting demands. It’s time to drop the wrench and rethink hard about the purpose of communication and which fidelity we should be focusing on now. Here are a few questions worth asking ourselves: Garbage in, garbage out - should we care A LOT about what to communicate, aka the input, instead of just making sure we transport “garbage” in high fidelity? Do we communicate the RIGHT information, no more and no less, over the communication system? Instead of trying to beat the Shannon limit, shouldn’t we change the battlefield and rethink what “information” we feed into the pipes as a better approach? Can we make the RIGHT communication choices when the communication system is peripheralized from the rest of the information processing systems, namely compute and storage? Think of an interactive holographic telepresence scheme that transports only hundreds of Mbps of metadata instead of 100-1000s of Gbps of encoded 3D data: that is a 3-4 orders of magnitude reduction in both the bandwidth and latency demands on the communication system, making it a much more achievable goal. This is a challenging time for networking, but it’s also a perfect time for breaking away: Welcome to an era of converged compute, storage and networking, friend!</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Reflection on Network Intelligence</title><link href="http://0.0.0.0:4000/portfolio/posts/reflection-on-network-intelligence" rel="alternate" type="text/html" title="Reflection on Network Intelligence" /><published>2020-06-11T00:00:00+00:00</published><updated>2020-06-11T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/Reflection-On-Network-Intelligence</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/reflection-on-network-intelligence"><h2 id="reflection-on-network-intelligence">Reflection on Network Intelligence</h2>
<p>I remember that in my early days in the networking industry, Cisco used to fight with Microsoft over where network intelligence should be implemented. Microsoft, which at the time dominated the PC market, was pushing network features at the end-point, such as security, acceleration, QoS etc. Network vendors led by Cisco, on the other hand, wanted to weave most if not all network intelligence into the network itself. As we know, similar fights have never ended; only the players and battlefields change. Unfortunately, this has left us some confusing products and terms we still have to deal with today, such as UTM devices that sit in the network but are truly not networking devices, and overkill TCP/IP stacks deployed among end-points to accommodate the worst network scenario even when they actually communicate in a close-proximity LAN environment.</p>
<p>Arguments from either side are valid to a certain extent, and I’m not making any judgment here. It just suddenly struck me when I was trying to solve a networking issue that seemed unsolvable under conflicting constraints. That was the starting point of my long journey of rethinking some of the basics of the networking industry. For such a long time we have isolated networking from other information technologies such as compute and storage. We define protocols and implement network infrastructures based on the very <strong>OLD</strong> assumption that we networking guys are only responsible for delivering data from A to B fast, reliably and efficiently. The additional intelligence we want baked into the network, such as TCP offload, in-network retransmit etc., also serves the above purposes. We never question why we need to do that, and unfortunately this holds true for the other parts as well.</p>
<p>About 9 years ago I started moving my knowledge stack up from embedded network design to software defined networking (SDN), Intent Based Networking (IBN) and then networked applications. The higher I go up the stack, the bigger the scope I can see, both problems and solutions, from interface to network element, to site, to multi-site, to global. During the transition, I learned that something that used to be so important, even critical, locally becomes irrelevant when looked at from a much bigger picture, such as pursuing a zero drop rate at the interface level, which has very little impact on the application, our ultimate goal.</p>
<p>Along this line of thinking, I would argue that even if all parts are optimal, the combination may not be optimal. Why? It also strongly depends on how we partitioned the original problem; in other words, how the parts are divided and their responsibilities defined. We now know we need to optimize networking end-to-end to satisfy application needs rather than focus on some intermediate segment. What we also need to know is that there is <strong>NO fixed boundary</strong> among compute, storage and networking within an application. As I mentioned in the 1st post in the <a href="/portfolio/posts/reflection-on-osi-model">NCS-Reflection</a> series, rather than shipping data between compute and storage or between compute units, compute instructions can be shipped or triggered remotely under certain circumstances, which totally changes the requests and requirements for the networking part.</p>
<p>To summarize, I think we should not talk about intelligence or optimization in an isolated manner, whether for just a network segment or the network as a whole. Intelligence and optimization must serve the benefit of the application and hence must be considered as a whole at the application level before being divided into networking or computing portions. There is no need for the network to exist if it doesn’t serve compute and storage; on the other hand, if compute and storage treat networking as part of the interconnect, like ALU to registers or CPU core to memory and I/O systems, the world will be just like one super big computer, with interconnects among compute nodes and storage nodes. If applications within a computer can be made (largely) agnostic to computer configuration, we sure can make them fly in this globally abstracted super computer.</p>
<p>This is a challenging time for networking, but it’s also a perfect time for breaking away: Welcome to an era of converged compute, storage and networking, friend!</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Networked Compute and Storage" /><category term="Insight" /><category term="Technology" /><category term="Solution" /><summary type="html">Reflection on Network Intelligence I remember that in my early days in the networking industry, Cisco used to fight with Microsoft over where network intelligence should be implemented. Microsoft, which at the time dominated the PC market, was pushing network features at the end-point, such as security, acceleration, QoS etc. Network vendors led by Cisco, on the other hand, wanted to weave most if not all network intelligence into the network itself. As we know, similar fights have never ended; only the players and battlefields change. Unfortunately, this has left us some confusing products and terms we still have to deal with today, such as UTM devices that sit in the network but are truly not networking devices, and overkill TCP/IP stacks deployed among end-points to accommodate the worst network scenario even when they actually communicate in a close-proximity LAN environment. Arguments from either side are valid to a certain extent, and I’m not making any judgment here. It just suddenly struck me when I was trying to solve a networking issue that seemed unsolvable under conflicting constraints. That was the starting point of my long journey of rethinking some of the basics of the networking industry. For such a long time we have isolated networking from other information technologies such as compute and storage. We define protocols and implement network infrastructures based on the very OLD assumption that we networking guys are only responsible for delivering data from A to B fast, reliably and efficiently. The additional intelligence we want baked into the network, such as TCP offload, in-network retransmit etc., also serves the above purposes. We never question why we need to do that, and unfortunately this holds true for the other parts as well. About 9 years ago I started moving my knowledge stack up from embedded network design to software defined networking (SDN), Intent Based Networking (IBN) and then networked applications. The higher I go up the stack, the bigger the scope I can see, both problems and solutions, from interface to network element, to site, to multi-site, to global. During the transition, I learned that something that used to be so important, even critical, locally becomes irrelevant when looked at from a much bigger picture, such as pursuing a zero drop rate at the interface level, which has very little impact on the application, our ultimate goal. Along this line of thinking, I would argue that even if all parts are optimal, the combination may not be optimal. Why? It also strongly depends on how we partitioned the original problem; in other words, how the parts are divided and their responsibilities defined. We now know we need to optimize networking end-to-end to satisfy application needs rather than focus on some intermediate segment. What we also need to know is that there is NO fixed boundary among compute, storage and networking within an application. As I mentioned in the 1st post in the NCS-Reflection series, rather than shipping data between compute and storage or between compute units, compute instructions can be shipped or triggered remotely under certain circumstances, which totally changes the requests and requirements for the networking part. To summarize, I think we should not talk about intelligence or optimization in an isolated manner, whether for just a network segment or the network as a whole. Intelligence and optimization must serve the benefit of the application and hence must be considered as a whole at the application level before being divided into networking or computing portions. There is no need for the network to exist if it doesn’t serve compute and storage; on the other hand, if compute and storage treat networking as part of the interconnect, like ALU to registers or CPU core to memory and I/O systems, the world will be just like one super big computer, with interconnects among compute nodes and storage nodes. If applications within a computer can be made (largely) agnostic to computer configuration, we sure can make them fly in this globally abstracted super computer. This is a challenging time for networking, but it’s also a perfect time for breaking away: Welcome to an era of converged compute, storage and networking, friend!</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Reflection on OSI Model</title><link href="http://0.0.0.0:4000/portfolio/posts/reflection-on-osi-model" rel="alternate" type="text/html" title="Reflection on OSI Model" /><published>2020-05-31T00:00:00+00:00</published><updated>2020-05-31T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/Reflection-On-OSI-Model</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/reflection-on-osi-model"><h2 id="reflection-on-osi-model">Reflection on OSI model</h2>
<p>In the world of inter-networking, the most popular standard has to be the TCP/IP protocol suite, which forms the basis of the <strong>Internet</strong> connecting billions of computers and devices around the world. It’s so popular and vital to our daily lives nowadays that it’s used even when communication is local across compatible networks. For example, the popular network file system NFS uses the IP protocol even though it’s deployed mainly in homogeneous Ethernet LAN environments.</p>
<p>Software standards like TCP/IP, which allow reliable communication without demanding reliable networks, are the key enabling technology for internetworking. Looking back at the standards battle with ATM in the 1990s, one key success factor was TCP/IP’s decomposed, layered hierarchy: each layer takes responsibility for just a portion of the overall communication task. To avoid ambiguity in terminology, the Open Systems Interconnection (OSI) model was developed, which describes networks as a series of layers:</p>
<table>
<thead>
<tr>
<th style="text-align: center">Layer number</th>
<th style="text-align: left">Layer name</th>
<th style="text-align: left">Main function</th>
<th style="text-align: left">Example protocol</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">7</td>
<td style="text-align: left">Application</td>
<td style="text-align: left">Application running over network</td>
<td style="text-align: left">DNS, Web</td>
</tr>
<tr>
<td style="text-align: center">6</td>
<td style="text-align: left">Presentation</td>
<td style="text-align: left">Translate between Application and network format</td>
<td style="text-align: left"> </td>
</tr>
<tr>
<td style="text-align: center">5</td>
<td style="text-align: left">Session</td>
<td style="text-align: left">Session lifecycle management across network</td>
<td style="text-align: left">RPC, named pipes</td>
</tr>
<tr>
<td style="text-align: center">4</td>
<td style="text-align: left">Transport</td>
<td style="text-align: left">End-to-end, host-to-host data delivery and reliability</td>
<td style="text-align: left">TCP</td>
</tr>
<tr>
<td style="text-align: center">3</td>
<td style="text-align: left">Network</td>
<td style="text-align: left">Logical addressing and routing between networks</td>
<td style="text-align: left">IP</td>
</tr>
<tr>
<td style="text-align: center">2</td>
<td style="text-align: left">Data Link</td>
<td style="text-align: left">Node-to-node framing: transform packets into bits and back</td>
<td style="text-align: left">Ethernet</td>
</tr>
<tr>
<td style="text-align: center">1</td>
<td style="text-align: left">Physical</td>
<td style="text-align: left">TX/RX bit stream over physical media</td>
<td style="text-align: left">IEEE 802</td>
</tr>
</tbody>
</table>
<p>The key to protocol families following the OSI model is that communication logically occurs at the same protocol level between sender and receiver, while services from the lower levels implement it. Just like abstract data types, which simplify the programmer’s task by hiding the implementation details of the data type, offering the services needed by the upper layer at each protocol layer makes the standard easier to understand and implement with cross-industry compatibility, especially for lower-level protocol vendors.</p>
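<p>This layered encapsulation idea can be sketched in a few lines of code. The sketch below is purely illustrative (the layer names and the JSON “header” format are made-up assumptions, not a real protocol stack): each layer wraps the payload with its own header on the way down and strips it on the way up, so peer layers logically exchange the same data.</p>

```javascript
// Minimal sketch of per-layer encapsulation; the layer names and the JSON
// "header" format are illustrative assumptions, not a real protocol stack.
function wrap(layer, payload) {
  // The "header" here is just the layer name tagged onto the payload.
  return JSON.stringify({ layer: layer, payload: payload });
}

function unwrap(layer, frame) {
  var parsed = JSON.parse(frame);
  if (parsed.layer !== layer) {
    throw new Error("header mismatch at layer " + layer);
  }
  return parsed.payload;
}

var layers = ["transport", "network", "link"];

// Sender: application data descends the stack, gaining one header per layer.
var frame = "GET /index.html";
layers.forEach(function (layer) { frame = wrap(layer, frame); });

// Receiver: ascend the stack, stripping headers in reverse order.
layers.slice().reverse().forEach(function (layer) { frame = unwrap(layer, frame); });

console.log(frame); // back to the original application data
```

<p>Each layer inspects only its own header, which is exactly the information-hiding property the abstract-data-type analogy describes.</p>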
<p>For the twenty years since the mid-1990s, the Internet has enjoyed exponential growth thanks to the dominant position of TCP/IP. Network vendors like Cisco took the lead, and protocols were added and extended with a bottom-up approach to protect the investment in lower-level infrastructure. It is only in recent years that we have seen challenges and movements against the long-standing TCP/IP stack and traditional network infrastructures (at both the topology and component levels), driven by OTT providers’ performance, scalability and fault-tolerance demands.</p>
<p>The analysis of the huge success of AWS, Azure, GCP and Ali Cloud in the past 5-10 years deserves a dedicated study series, but here is one important factor from a technology point of view: they needed purpose-built networks for their specific business needs, so they built proprietary homogeneous network infrastructures from scratch with optimized and simplified protocol stacks, achieving gains in cost, power consumption and networking efficiency.</p>
<p>This approach is on the right track but starts to show limitations when facing challenges from:</p>
<ul>
<li>Data volume and dispersal driven by AI and IoT</li>
<li>Dynamic business logic driven by tenant IT migration</li>
</ul>
<p>Simply put, the <strong><em>“simplification”</em></strong> approach OTT providers have enjoyed for their past success is just not enough in the face of these new challenges, which demand <strong>REAL</strong> application-driven design, following a <strong>Top-Down</strong> principle.</p>
<p>Back to the OSI model: we have to abandon the assumptions of TCP/IP, Ethernet, etc., and go back to basics:</p>
<ol>
<li>Layers exist to separate responsibilities hierarchically</li>
<li>An upper layer should not be aware of or concerned about the lower layer</li>
<li>A lower layer is responsible for fulfilling or rejecting requests from the upper layer</li>
</ol>
<p>This means that, instead of the bottom-up approach where an upper layer operates based on what the lower layer offers, the upper layer operates purely based on its own needs and sends only intent to the lower layer, which has the flexibility to choose any available and viable technology to fulfill or reject the request.</p>
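<p>A hypothetical sketch of that intent-driven contract may help. Everything below is an illustrative assumption (the capability fields, intent shape and transport names are invented for this example, not an existing API): the upper layer states only what it needs, and the lower layer either selects a viable technology or rejects the request.</p>

```javascript
// Hypothetical intent-based layering sketch: the upper layer never names a
// protocol; the lower layer matches the intent against what it has available.
// All names (capability fields, intent shape, transports) are illustrative.
var transports = [
  { name: "ethernet",    reliable: false, bandwidthMbps: 1000 },
  { name: "tcp-over-ip", reliable: true,  bandwidthMbps: 900 },
  { name: "wifi",        reliable: false, bandwidthMbps: 300 }
];

// Lower layer: fulfill the intent with any viable transport, or reject it.
function fulfill(intent) {
  var viable = transports.filter(function (t) {
    if (intent.reliable) {
      if (!t.reliable) { return false; }
    }
    return t.bandwidthMbps >= intent.minBandwidthMbps;
  });
  if (viable.length === 0) {
    return { accepted: false };
  }
  return { accepted: true, via: viable[0].name };
}

// Upper layer: send only the intent, not a technology choice.
var result = fulfill({ reliable: true, minBandwidthMbps: 500 });
console.log(result); // accepted, via "tcp-over-ip"
```

<p>The upper layer’s code never changes when the lower layer swaps technologies in or out, which is the decoupling the top-down principle is after.</p>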
<p>Think of a future where we can have harmonized heterogeneous network clusters (devices, topologies or protocols) within data centers, extending into edge, campus and even SoC networks, tailored for different compute-storage usage and totally transparent to the application layer; think of a laptop where the bandwidth of the wired link and all available WiFi bands can be utilized simultaneously; think of Google-search-like information queries from applications; think of instructions, rather than data, being sent across the network to fulfill a request using remote data, and so on.</p>
<p>This is a challenging time for networking, but it’s also a perfect time for breaking away: welcome to an era of converged compute, storage and networking, friend!</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Networked Compute and Storage" /><category term="Insight" /><category term="Technology" /><category term="Solution" /><summary type="html">Reflection on OSI model In the world of internetworking, the most popular standard has to be the TCP/IP protocol suite, which forms the basis of the Internet connecting billions of computers and devices around the world. It is so popular and vital to our daily lives nowadays that it is used even when communication is local, across compatible networks. For example, the popular network file system NFS uses the IP protocol even though it is deployed mainly in homogeneous Ethernet LAN environments. Software standards like TCP/IP, which allow reliable communication without demanding reliable networks, are the key enabling technology for internetworking. Looking back at the standards battle with ATM in the 1990s, one key success factor was TCP/IP’s decomposed, layered hierarchy: each layer takes responsibility for just a portion of the overall communication task. 
To avoid ambiguity in terminology, the Open Systems Interconnection (OSI) model was developed, which describes networks as a series of layers: Layer number Layer name Main function Example protocol 7 Application Application running over network DNS, Web 6 Presentation Translate between Application and network format 5 Session Session lifecycle management across network RPC, named pipes 4 Transport End-to-end, host-to-host data delivery and reliability TCP 3 Network Logical addressing and routing between networks IP 2 Data Link Node-to-node framing: transform packets into bits and back Ethernet 1 Physical TX/RX bit stream over physical media IEEE 802 The key to protocol families following the OSI model is that communication logically occurs at the same protocol level between sender and receiver, while services from the lower levels implement it. Just like abstract data types, which simplify the programmer’s task by hiding the implementation details of the data type, offering the services needed by the upper layer at each protocol layer makes the standard easier to understand and implement with cross-industry compatibility, especially for lower-level protocol vendors. For the twenty years since the mid-1990s, the Internet has enjoyed exponential growth thanks to the dominant position of TCP/IP. Network vendors like Cisco took the lead, and protocols were added and extended with a bottom-up approach to protect the investment in lower-level infrastructure. It is only in recent years that we have seen challenges and movements against the long-standing TCP/IP stack and traditional network infrastructures (at both the topology and component levels), driven by OTT providers’ performance, scalability and fault-tolerance demands. 
The analysis of the huge success of AWS, Azure, GCP and Ali Cloud in the past 5-10 years deserves a dedicated study series, but here is one important factor from a technology point of view: they needed purpose-built networks for their specific business needs, so they built proprietary homogeneous network infrastructures from scratch with optimized and simplified protocol stacks, achieving gains in cost, power consumption and networking efficiency. This approach is on the right track but starts to show limitations when facing challenges from: Data volume and dispersal driven by AI and IoT Dynamic business logic driven by tenant IT migration Simply put, the “simplification” approach OTT providers have enjoyed for their past success is just not enough in the face of these new challenges, which demand REAL application-driven design, following a Top-Down principle. Back to the OSI model: we have to abandon the assumptions of TCP/IP, Ethernet, etc., and go back to basics: Layers exist to separate responsibilities hierarchically An upper layer should not be aware of or concerned about the lower layer A lower layer is responsible for fulfilling or rejecting requests from the upper layer This means that, instead of the bottom-up approach where an upper layer operates based on what the lower layer offers, the upper layer operates purely based on its own needs and sends only intent to the lower layer, which has the flexibility to choose any available and viable technology to fulfill or reject the request. Think of a future where we can have harmonized heterogeneous network clusters (devices, topologies or protocols) within data centers, extending into edge, campus and even SoC networks, tailored for different compute-storage usage and totally transparent to the application layer; think of a laptop where the bandwidth of the wired link and all available WiFi bands can be utilized simultaneously; think of Google-search-like information queries from applications; think of instructions, rather than data, being sent across the network to fulfill a request using remote data, and so on. 
This is a challenging time for networking, but it’s also a perfect time for breaking away: welcome to an era of converged compute, storage and networking, friend!</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">NCS Category readme</title><link href="http://0.0.0.0:4000/portfolio/posts/ncs-readme" rel="alternate" type="text/html" title="NCS Category readme" /><published>2020-05-11T00:00:00+00:00</published><updated>2020-05-11T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/NCS-Readme</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/ncs-readme"><h2 id="readme-for-ncd-category">README for NCS Category</h2>
<p>NCS stands for Networked Compute and Storage, representing my interest in converged Compute, Data and networking infrastructure design.</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Networked Compute and Storage" /><category term="Insight" /><category term="Technology" /><category term="Solution" /><summary type="html">README for NCS Category NCS stands for Networked Compute and Storage, representing my interest in converged Compute, Data and networking infrastructure design.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Blog Categories and Tags</title><link href="http://0.0.0.0:4000/portfolio/posts/blog-categories-tags" rel="alternate" type="text/html" title="Blog Categories and Tags" /><published>2020-05-10T00:00:00+00:00</published><updated>2020-05-10T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/blog-categories-tags</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/blog-categories-tags"><p>After playing with Jekyll for a while, I have decided on the following structure to organize my blog posts for better navigation and grouping.</p>
<h2 id="category">Category</h2>
<p>Categories are used as top-level classifiers to separate blog posts into main areas of interest. At the time of writing, I’m using the following categories:</p>
<ul>
<li><strong>NCS</strong> for Networked Compute &amp; Storage</li>
<li><strong>Knowledge Graph</strong></li>
<li><strong>Exploration</strong> serves as a kitchen sink for all studies, hobbies, etc. for now</li>
</ul>
<h2 id="tag">Tag</h2>
<p>Tags are attributes that help further filter and group posts based on their characteristics. There is <strong><em>NO</em></strong> real difference between a category and a tag; I chose some as categories merely to group content by my major areas of interest first.</p>
<p>I split Tags into 4 dimensions:</p>
<ol>
<li>To what extent:
<ul>
<li><strong>Insight</strong> focuses on observations, facts and impressions</li>
<li><strong>Foresight</strong> focuses on deepening and broadening the analysis, forming opinions, predictions and designs</li>
<li><strong>Practice</strong> focuses on testing, evaluation and discovery</li>
</ul>
</li>
<li>From which angle/perspective:
<ul>
<li><strong>Market</strong> focuses on market trends and customer demand</li>
<li><strong>Industry</strong> focuses on industry trends and solutions</li>
<li><strong>Technology</strong> focuses on fundamental academic research</li>
</ul>
</li>
<li>Applicable to what sub-area:
<ul>
<li><strong>Infrastructure</strong></li>
<li><strong>Solution</strong></li>
<li><strong>Application</strong></li>
<li><strong>Usecase</strong></li>
</ul>
</li>
<li>Keywords (Multiple)
<ul>
<li><strong>Jekyll</strong></li>
<li><strong>Liquid</strong></li>
<li><strong>DCN</strong></li>
<li><strong>DCI</strong></li>
<li><strong>Edge</strong></li>
<li><strong>Campus</strong></li>
<li><strong><em>etc</em></strong></li>
</ul>
</li>
</ol>
<p>As you can see, tags within dimensions 1-3 are mutually exclusive by nature, so they are more like sub-categories. But given current Jekyll limitations, I will leave them as tags and explore other options like <a href="https://github.com/sverrirs/jekyll-paginate-v2">paginate v2</a> later.</p></content><author><name>Bin Liang</name><email>[email protected]</email></author><category term="Exploration" /><category term="Practice" /><category term="Technology" /><category term="Solution" /><category term="Jekyll" /><summary type="html">After playing with Jekyll for a while, I have decided on the following structure to organize my blog posts for better navigation and grouping. Category Categories are used as top-level classifiers to separate blog posts into main areas of interest. At the time of writing, I’m using the following categories: NCS for Networked Compute &amp; Storage Knowledge Graph Exploration serves as a kitchen sink for all studies, hobbies, etc. for now Tag Tags are attributes that help further filter and group posts based on their characteristics. There is NO real difference between a category and a tag; I chose some as categories merely to group content by my major areas of interest first. I split Tags into 4 dimensions: To what extent: Insight focuses on observations, facts and impressions Foresight focuses on deepening and broadening the analysis, forming opinions, predictions and designs Practice focuses on testing, evaluation and discovery From which angle/perspective: Market focuses on market trends and customer demand Industry focuses on industry trends and solutions Technology focuses on fundamental academic research Applicable to what sub-area: Infrastructure Solution Application Usecase Keywords (Multiple) Jekyll Liquid DCN DCI Edge Campus etc As you can see, tags within dimensions 1-3 are mutually exclusive by nature, so they are more like sub-categories. 
But given current Jekyll limitations, I will leave them as tags and explore other options like paginate v2 later.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry xml:lang="en"><title type="html">Markdown and HTML</title><link href="http://0.0.0.0:4000/portfolio/posts/markdown-and-html" rel="alternate" type="text/html" title="Markdown and HTML" /><published>2020-04-05T00:00:00+00:00</published><updated>2020-04-05T00:00:00+00:00</updated><id>http://0.0.0.0:4000/portfolio/posts/markdown-and-html</id><content type="html" xml:base="http://0.0.0.0:4000/portfolio/posts/markdown-and-html"><p>Jekyll supports the use of <a href="http://daringfireball.net/projects/markdown/syntax">Markdown</a> with inline HTML tags, which makes it easier to quickly write posts with Jekyll, without having to worry too much about text formatting. A sample of the formatting follows.</p>
<p>Tables have also been extended from Markdown:</p>
<table>
<thead>
<tr>
<th>First Header</th>
<th>Second Header</th>
</tr>
</thead>
<tbody>
<tr>
<td>Content Cell</td>
<td>Content Cell</td>
</tr>
<tr>
<td>Content Cell</td>
<td>Content Cell</td>
</tr>
</tbody>
</table>
<p>Here’s an example of an image, which is included using Markdown:</p>
<p><img src="/assets/img/pexels/book-glass.jpeg" alt="Image of a glass on a book" /></p>
<p>Highlighting for code in Jekyll is done using Base16 or Rouge. This theme makes use of Rouge by default.</p>
<figure class="highlight"><pre><code class="language-js" data-lang="js"><span class="c1">// count to ten</span>
<span class="k">for</span> <span class="p">(</span><span class="kd">var</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">1</span><span class="p">;</span> <span class="nx">i</span> <span class="o">&lt;=</span> <span class="mi">10</span><span class="p">;</span> <span class="nx">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">i</span><span class="p">);</span>
<span class="p">}</span>
<span class="c1">// count to twenty</span>
<span class="kd">var</span> <span class="nx">j</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span>
<span class="k">while</span> <span class="p">(</span><span class="nx">j</span> <span class="o">&lt;</span> <span class="mi">20</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">j</span><span class="o">++</span><span class="p">;</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">j</span><span class="p">);</span>
<span class="p">}</span></code></pre></figure>
<p>Type on Strap uses KaTeX to display maths. Equations such as \(S_n = a \times \frac{1-r^n}{1-r}\) can be displayed inline.</p>
<p>Alternatively, they can be shown on a new line:</p>
\[f(x) = \int \frac{2x^2+4x+6}{x-2}\]</content><author><name>Sylhare</name></author><category term="Exploration" /><category term="Practice" /><category term="Technology" /><category term="Solution" /><category term="Jekyll" /><summary type="html">Jekyll supports the use of Markdown with inline HTML tags which makes it easier to quickly write posts with Jekyll, without having to worry too much about text formatting. A sample of the formatting follows. Tables have also been extended from Markdown: First Header Second Header Content Cell Content Cell Content Cell Content Cell Here’s an example of an image, which is included using Markdown: Highlighting for code in Jekyll is done using Base16 or Rouge. This theme makes use of Rouge by default. // count to ten for (var i = 1; i &lt;= 10; i++) { console.log(i); } // count to twenty var j = 0; while (j &lt; 20) { j++; console.log(j); } Type on Strap uses KaTeX to display maths. Equations such as \(S_n = a \times \frac{1-r^n}{1-r}\) can be displayed inline. Alternatively, they can be shown on a new line: \[f(x) = \int \frac{2x^2+4x+6}{x-2}\]</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://0.0.0.0:4000/assets/img/software.jpg" /><media:content medium="image" url="http://0.0.0.0:4000/assets/img/software.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>