Real Time Operating Systems

A real-time operating system (commonly abbreviated RTOS) is a software stack responsible for managing the workload and resources in embedded systems. The core of a microcontroller can execute only one task/thread at a time, and given the capacity of modern microcontrollers, dedicating one to a single specific task (such as reading a sensor or blinking an LED) would waste its potential. The main job of an RTOS is to rapidly switch between different tasks, so that multiple programs appear to execute simultaneously on a single core. [1, 2]

The term real-time in RTOS reflects the event-driven and pre-emptive nature of these systems. In the event-driven programming paradigm, the flow of the program is decided by the occurrence of events (user actions, sensor values, messages from other programs, etc.). The pre-emptive nature of an RTOS means that a running task can be suspended when an interrupt arrives from outside the current task, and resumed at a later time.

If you have tried to understand the architecture of microcontrollers and microprocessors, you must have come across the term kernel. Have you wondered what a kernel is and what it does? A kernel is the set of programs at the core of an OS or RTOS that controls everything in the system; as a first approximation, you can think of the kernel as an interface between the user and the hardware. There are five types of kernel architecture: microkernel, monolithic kernel, hybrid kernel, exokernel and nanokernel. Of these five, the microkernel and the monolithic kernel are the most commonly used.

An operating system or RTOS can be divided into two categories of services: user services and kernel services. User services comprise the application layers, the space in which users interact with the machine; kernel services are dedicated to allocating hardware to those requests. In a monolithic architecture, the entire OS/RTOS runs as a single program (user space and kernel space share the same address space), whereas in a microkernel, kernel services and user services are separate programs running in different address spaces. Monolithic kernels are bigger, with a fixed set of user applications, while microkernel designs keep the kernel relatively small and allow required features to be added on top. Microkernels are the newer trend precisely because of this ease of adding features. The Linux Foundation maintains a well-known RTOS called Zephyr, which is based on a microkernel architecture (it contains a small nanokernel as well). FreeRTOS, another commonly used RTOS, also follows a microkernel design.

RTOSes are classified into three categories: hard, soft and firm. In a hard RTOS, critical tasks are guaranteed to complete within their time range (examples: air traffic control systems, medical imaging systems). In a soft RTOS, some relaxation of the timing is allowed (example: multimedia systems). In a firm RTOS, deadlines are still enforced, but the impact of missing one is smaller.

Which RTOS to choose for your project depends on several factors, and there are many RTOSes to choose from; Wikipedia has a comparison table covering over three dozen of them [3], some open source and some not. [4] explains, in a detailed blog post, some parameters to consider before choosing an RTOS. The main considerations include memory size, RTOS scalability, shell access (a command-line interface to the RTOS) and memory management unit support.

This article lists some common RTOSes along with their main features, supported microcontroller boards, number of active contributors and users, kernel size and communication protocols. It focuses on three common open source RTOSes: FreeRTOS, Zephyr and RIOT.

FreeRTOS is one of the best-known open source RTOSes. It is backed by Amazon, has been ported to several architectures (Arm Cortex-M7, M3, M4, M0+, A5, ARM7, ARM9, Tensilica Xtensa, RISC-V) and is used on STM32, ESP32 and Arduino boards. Its popularity makes it easy to find help with any issue, and because it is supported by Amazon, it comes with several APIs for integrating with Amazon Web Services [6, 7].
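
To make the task-switching described above concrete, here is a minimal FreeRTOS sketch (a sketch only, assuming the standard FreeRTOS task API on a port where printf is usable; the task bodies are placeholders for real I/O). Two tasks are created, and the scheduler interleaves them so rapidly that both appear to run at once on a single core.

#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* Stand-in for toggling an LED every 500 ms. */
static void blink_task(void *params)
{
    (void)params;
    for (;;) {
        printf("blink\n");
        vTaskDelay(pdMS_TO_TICKS(500)); /* sleep, yielding the CPU */
    }
}

/* Stand-in for polling a sensor every 100 ms. */
static void sensor_task(void *params)
{
    (void)params;
    for (;;) {
        printf("sample\n");
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

int main(void)
{
    xTaskCreate(blink_task,  "blink",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(sensor_task, "sensor", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler(); /* hands control to the RTOS; never returns */
    for (;;) { }           /* only reached if the scheduler fails to start */
}

Each vTaskDelay call suspends the calling task and lets the scheduler run whatever else is ready, which is exactly the rapid switching described above.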

Zephyr originated from the Virtuoso RTOS (later developed at Wind River), was subsequently taken over by the Linux Foundation, and is currently the largest open source RTOS project [5]. As mentioned on its GitHub page, it has 124 repositories and 4730 forks [8; accessed 28/04/2023]. The Zephyr project has active contributions from Nordic, Intel, ST, Espressif and Wind River. The range of supported APIs in Zephyr makes it an interesting choice for developers: if you scroll through the source tree at [9], you will find more APIs than FreeRTOS and RIOT offer.
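
For comparison, here is the Zephyr equivalent of a single periodic task (again only a sketch, assuming Zephyr's kernel API; the stack size and priority are arbitrary choices):

#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

#define STACK_SIZE 1024
#define PRIORITY   5

/* Thread entry point: print a message once a second. */
static void blink_entry(void *a, void *b, void *c)
{
    ARG_UNUSED(a); ARG_UNUSED(b); ARG_UNUSED(c);
    while (1) {
        printk("tick\n");
        k_msleep(1000); /* sleep for 1000 ms */
    }
}

/* Statically define and start the thread at boot. */
K_THREAD_DEFINE(blink_tid, STACK_SIZE, blink_entry,
                NULL, NULL, NULL, PRIORITY, 0, 0);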

RIOT was developed by academic institutions, namely Inria, FU Berlin and HAW Hamburg [10]. It has a tiny kernel with a microkernel architecture and supports C, C++ and Rust as programming languages. The source code is available on GitHub, with 1886 forks across 29 repositories [11; accessed 28/04/2023]. It supports several architectures (AVR, MSP430, ESP8266, ESP32, RISC-V, ARM7 and ARM Cortex-M) and can be integrated with many boards. A list of supported drivers is provided at [12].
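
And a RIOT counterpart (also a sketch, assuming RIOT's thread API with the ztimer_msec module enabled in the application's Makefile):

#include <stdio.h>
#include "thread.h"
#include "ztimer.h"

static char stack[THREAD_STACKSIZE_MAIN];

/* Thread handler: print a message once a second. */
static void *blink_handler(void *arg)
{
    (void)arg;
    while (1) {
        puts("tick");
        ztimer_sleep(ZTIMER_MSEC, 1000); /* sleep for 1000 ms */
    }
    return NULL;
}

int main(void)
{
    thread_create(stack, sizeof(stack),
                  THREAD_PRIORITY_MAIN - 1, 0,
                  blink_handler, NULL, "blink");
    return 0;
}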

The table below was put together by searching for some important features of these three RTOSes; for some of the entries it provides a link as well.

| Feature | Zephyr | FreeRTOS | RIOT |
| --- | --- | --- | --- |
| Supported boards | https://docs.zephyrproject.org/3.2.0/boards/index.html | Microcontrollers and compiler tool chains supported by FreeRTOS | RIOT – The friendly Operating System for the Internet of Things (riot-os.org) |
| Supported architectures | ARC, ARM, ARM64, MIPS, RISC-V, SPARC, x86, Xtensa, POSIX, Nios II | Arm Cortex-M7, M3, M4, M0+, A5, ARM7, ARM9, Tensilica Xtensa, RISC-V | AVR, MSP430, ESP8266, ESP32, RISC-V, ARM7, ARM Cortex-M |
| No. of GitHub forks | 4730 | 1173 | 1879 |
| No. of GitHub repositories | 124 | 37 | 29 |
| License | Apache 2.0 | MIT | LGPL 2.1 |
| I2C | Yes (i2c – drivers/i2c – Zephyr source code (v3.3.0) – Bootlin) | Yes | Yes (https://github.com/RIOT-OS/RIOT/blob/master/drivers/periph_common/i2c.c) |
| I3C | Yes (i3c – drivers/i3c – Zephyr source code (v3.3.0) – Bootlin) | – | – |
| SPI | Yes | Yes | Yes (https://github.com/RIOT-OS/RIOT/blob/master/drivers/periph_common/spi.c) |
| UART | Yes | Yes | Yes (https://github.com/RIOT-OS/RIOT/blob/master/drivers/periph_common/Kconfig.uart) |
| USB | Yes | Yes (FreeRTOS + USB – Libraries – FreeRTOS Community Forums) | Yes (https://github.com/RIOT-OS/RIOT/blob/master/drivers/periph_common/Kconfig.usbdev) |
| WiFi | Yes (wifi – drivers/wifi – Zephyr source code (v3.3.0) – Bootlin) | Yes (FreeRTOS (Part 7): FreeRTOS, TCP and the Internet – Circuit Cellar) | Yes (ESP8266 / ESP8285 (riot-os.org)) |
| CAN | Yes | Yes (RTOS with CAN Bus – FreeRTOS) | Yes (https://doc.riot-os.org/group__drivers__can.html) |
| DSP | Yes (https://docs.zephyrproject.org/latest/services/dsp/index.html) | – | – |
| Cryptography | Yes (https://elixir.bootlin.com/zephyr/latest/source/drivers/crypto) | Yes (Security for Arm Cortex-M devices with FreeRTOS – FreeRTOS) | – |
| Languages supported | C and C++ (Language Support — Zephyr Project Documentation); Rust (Embedding Rust Into Zephyr Firmware Using C-bindgen – Zephyr Project) | C and Python; Rust (freertos_rs – Rust (docs.rs)) | C, C++, Python, Rust (work in progress) |
| Cloud applications supported | Yes, with Golioth (https://www.youtube.com/watch?v=lS6-nmHnlTg&t=173s) | Yes, with AWS IoT OTA, AWS IoT Device Shadow, AWS IoT Jobs, AWS IoT Device Defender, AWS IoT Fleet Provisioning | – |
| Example projects | https://www.zephyrproject.org/learn-about/applications/#hereo | An RTOS demo that is hardware independent (freertos.org) | ESP8266 / ESP8285 (riot-os.org) |
| Anchor contributors / maintainers | Linux Foundation, Nordic, STMicroelectronics, Intel, Wind River, Espressif | Real Time Engineers Ltd, Amazon, Arm, Cypress, Espressif, Microchip, Renesas, RISC-V, SEGGER, ST, TI, Infineon, NXP | Inria, MLPA GmbH, Freie Universität Berlin |

The choice of RTOS depends on the architecture of your system and the applications you are aiming for. We suggest checking the supported architectures and the features you require via the links provided for each RTOS, and then deciding. Each of them comes with its own merits, and it is helpful to read about them.

References:

[1] https://en.wikipedia.org/wiki/Real-time_operating_system

[2] https://www.geeksforgeeks.org/real-time-operating-system-rtos/

[3] https://en.wikipedia.org/wiki/Comparison_of_real-time_operating_systems

[4] How to Choose a Real-Time Operating System (lynx.com)

[5] Zephyr Project – Zephyr Project

[6] https://lembergsolutions.com/blog/freertos-vs-zephyr-project-embedded-iot-projects

[7] FreeRTOS · GitHub

[8] Zephyr Project (github.com)

[9] https://elixir.bootlin.com/zephyr/latest/source

[10] RIOT (operating system) – Wikipedia

[11] RIOT · GitHub

[12] RIOT – The friendly Operating System for the Internet of Things (riot-os.org)

Toolchain of Arduino IDE for embedded systems

An IDE (integrated development environment) is a software development platform that provides developers with all the tools necessary for software development. A typical IDE consists of a source code editor, build system, compiler/interpreter, debugger, uploader and serial monitor.

The Arduino IDE is commonly used among hobbyists, tinkerers and researchers [1]. The IDE is simple to use, and the board selector makes it easy to integrate boards even from outside the Arduino world (such as Raspberry Pi, ESP32 [2] and STM32 [3]). The IDE consists of a source code editor, compiler, language support, build system, debugger and serial monitor. The source code editor in the Arduino IDE is known as the sketchbook, where multiple sketches can be created, edited and saved. The Arduino IDE uses GCC (the GNU Compiler Collection) as its compiler, an open source compiler created by the GNU project and distributed under the GNU GPL.

The sketch build process comprises multiple steps, from writing/editing the source code to uploading it to a microcontroller board. The main tasks are generating the build script, managing dependencies, configuring build options, defining build targets and cross-compiling. Arduino explains its build process step by step in its documentation [4], [5]. A summary of these steps is shown in the block diagram below.

Pre-processing converts the .ino files into a .cpp file. If the header is not already included, #include <Arduino.h> is added, which pulls in all the standard Arduino core definitions required by the program. An important thing to note is that files with any extension other than .ino and .pde do not go through pre-processing.
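
For illustration, here is a minimal sketch and, schematically, what pre-processing turns it into (the exact generated file varies with the IDE version, so treat this as a sketch of the idea):

/* Blink.ino -- what you write in the IDE */
void setup() {
    pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
    digitalWrite(LED_BUILTIN, HIGH);
    delay(500);
    digitalWrite(LED_BUILTIN, LOW);
    delay(500);
}

/* The generated .cpp begins roughly like this:
 *   #include <Arduino.h>  // added because the sketch didn't include it
 *   void setup();         // function prototypes generated for us
 *   void loop();
 * ...followed by the sketch body unchanged. */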

Dependency resolution is an important step of the build process: all the dependencies in the sketch are scanned recursively and then searched for along predefined paths (the core library folder, the variant folder, system directories and the include search paths). If a dependency is not found in these paths, it is searched for among the installed libraries. The build produces a file with a .d extension that lists all the dependencies.
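
The .d file is simply a Make-style dependency list. For a sketch that includes Wire.h, it might look roughly like this (paths shortened and purely illustrative):

Blink.ino.cpp.o: Blink.ino.cpp \
 /path/to/cores/arduino/Arduino.h \
 /path/to/libraries/Wire/src/Wire.h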

The next step is compilation, in which the sketch is compiled (with GCC for Arduino boards) for the selected board; a sketch compiled for a different board might not work. In the Arduino IDE, a tick button is provided to compile. The compiler takes the .cpp file (generated by pre-processing, where the .ino/.pde sketch was converted to .cpp) together with any .c and .S files and produces .o object files. As a time-saving step, a source file is not recompiled if .o and .d files already exist with a timestamp newer than the source code (meaning nothing has changed since the previous compilation).

The final .hex file is created by linking the .o files with the static libraries. The linker removes the parts not required by the sketch, reducing the size of the file to be uploaded to the board. This .hex file is the final output of the toolchain and is what gets uploaded to the board.
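
Roughly, the commands the IDE runs for an AVR board look like the following (flags trimmed; the MCU, clock and file names are examples, not the IDE's exact invocation):

# compile the pre-processed sketch to an object file
avr-g++ -c -Os -mmcu=atmega328p -DF_CPU=16000000L Blink.ino.cpp -o Blink.ino.cpp.o

# link against the precompiled core, dropping unused sections
avr-gcc -Os -mmcu=atmega328p -Wl,--gc-sections Blink.ino.cpp.o core.a -o Blink.elf

# convert the ELF binary into the .hex file that gets uploaded
avr-objcopy -O ihex -R .eeprom Blink.elf Blink.hex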

To upload the .hex file, a button named Upload, marked with a forward arrow, is provided on the top panel of the IDE. For Arduino boards, the uploader used is called avrdude.
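
Under the hood, the upload amounts to an avrdude invocation along these lines (the part, programmer, port and baud rate shown are examples for an Uno-style board):

avrdude -p atmega328p -c arduino -P /dev/ttyACM0 -b 115200 -U flash:w:Blink.hex:i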

References:

[1] https://docs.arduino.cc/software/ide-v1/tutorials/arduino-ide-v1-basics#serial-monitor

[2] https://randomnerdtutorials.com/installing-the-esp32-board-in-arduino-ide-windows-instructions/

[3] https://www.sgbotic.com/index.php?dispatch=pages.view&page_id=48

[4] https://arduino.github.io/arduino-cli/0.20/sketch-build-process/

[5] https://arduino.github.io/arduino-cli/0.20/platform-specification/

Plastic – The Eco-Friendly Option

It seems ironic, the topic of this fortnight's discussion. Let us start by defining the term 'eco-friendly'. To begin with, there is no specific definition; if you look it up online, you get something like "any goods or services that have a low or negligible impact on the ecosystem or environment".

Most companies today use this term ambiguously (cannot point fingers) to market their products or services, so we need to agree upon a working definition before we move forward. Let us say it applies to any product of the same performance/usage that has a lower impact on the environment than its predecessor. We will not use the term 'eco-friendly' henceforth; instead we will discuss the overall environmental impact of a product made of different materials, and see where plastics, the 'evil' of today's society, stand.

Let us begin by focusing on products that have seen a shift in material usage due to their environmental impact: plates and straws.

For the past few decades, the go-to options for eating and drinking on the go were plastic straws, plastic cups and plastic plates. These have killed fish and livestock, entered our bodies as micro-plastics, leached into the groundwater, damaged the soil, and much more. To save the environment, we are switching to paper straws, paper plates and paper cups.

Of course, the paper alternatives are much more expensive than their plastic counterparts, but that is mostly just economies of scale at play and can be set aside. The performance of the paper counterparts is not on par with plastic: they cannot hold liquids for long, the straws need to be much thicker (making them uncomfortable to use), oil leaks through paper plates, and they are structurally weaker.

Paper Plates vs Plastic Plates

  1. Weight and its impact: Paper plates need to be thicker than plastic plates for the same performance. Since our entire logistics chain runs on fossil fuels (even electric vehicles indirectly burn coal, and solar panels are very energy-intensive to manufacture; we will not go there, it is a whole different discussion), the environmental impact of transporting paper plates over the same distance is much higher.
  2. Post-processing: Are all paper plates made of just wood pulp (paper)? Obviously not. Various processes are applied to make them white or durable, and these use chlorine-based bleaching agents. Is this safer than plastic?
  3. Trees: We have all read that trees are a carbon sink, but how many trees are being cut down to make paper objects, and are those objects better than plastics?

These are just some of the points we are looking at when we compare paper and plastics.

Most of the above points have been left as questions. This is because the available data is hugely varied and cannot be pinned down; you can also think of further points along the same lines, e.g. paper plates cannot be washed and hence cannot be recycled, but plastic can be.

Now let us come to the major point of discussion, where we can see how plastics might be better than "eco-friendly" materials in the very aspects for which almost all environmentalists shun them. We will also discuss the larger picture, but leave it at a question...

Let's start with biodegradable materials. More often than you would imagine, biodegradable materials are more energy-intensive to manufacture: they require many processes to reach a practically usable stage, and they are often fused with non-biodegradable materials (e.g., paper cups are coated with plastic to make them durable, or bleached and dyed to make them attractive). The next point is quite interesting: most of these products end up in landfills anyway, where they degrade into methane (which causes global warming) and other organic matter that leaches into the soil and groundwater; and since these materials have been processed with various chemicals, those chemicals leach into the environment too. The decay of these materials creates a whole new ecosystem of micro-organisms that may or may not be harmful to the existing flora and fauna. Is it safer than plastic?

Coming to plastics: one of the most common plastics used is ABS (acrylonitrile butadiene styrene), a polymer made from compounds extracted primarily from petroleum and other sources. The first question we need to ask is whether making ABS from petroleum is more energy-intensive than the impact of cutting down a tree plus the processes needed to get paper products from it. A tree takes years to grow; it requires land, water, soil, air, sunlight, and so on. If we count the energy a tree requires to grow over its lifetime plus the energy required to cut, transport and process it, is that less than what making ABS takes? A question no one seems to care about. And let's not get into the "ethical" sourcing of raw materials; many of the trees are illegally cut from rainforests.

Why am I rooting for plastics when they stay in the ecosystem forever? Exactly: they stay forever, which means they are almost infinitely reusable! And obviously, melting down a chair and making a new one is less energy-intensive than any other alternative currently available.

The problem is very simple: it is economics and desire! We expect a range of different designs and colours for any product. Chairs are usually made of ABS, but there are white chairs, blue, red and green chairs, some with steel handles, some with gold trims, and many more. This causes the main issue in the plastic ecosystem: if two non-identical plastics are heated together, the resulting material is often less desirable in both its physical and chemical properties. The only real problem is the segregation of materials, and this is where the economics doesn't allow it. If tomorrow all chair manufacturers standardized their materials and offered only a few designs, then within a few years there would be enough waste chairs for an economically viable chair-recycling plant, and yes, the earth would be better off.

A great example is the trillion-dollar tech company named after a fruit, which makes and sells only 3-4 models of mobile phone a year. That means their parts, processes and materials are almost the same across the line-up. They have set up an initiative to buy back old phones, and they have built a robot to disassemble and segregate all the parts of these phones so that they can be recycled into new phones. A source (I am not sure of its reliability) says they have been able to extract as much gold and copper from 1 ton of phones as would otherwise require 2000 tons of mined earth. That did make a difference, don't you think...

So, at the end, I would like to conclude this rather long article (which still missed a few points) by saying that the material is not the problem; it is the recycling of these materials that is the problem. We can argue about the pros and cons of any material out there, be it metal, plastic, paper, sugarcane pulp or whatever: as long as they end up in a landfill or are incinerated, it makes no sizeable difference to the earth or the ecosystem. What we need is less variety in products and more standardized construction. I am a mechanical designer myself, and I don't see why companies cannot design products that are easily separable into their components, or why certain things can't be standardized by governments. If all cars (talking about your average city car) used the same material for the dashboard, without "sticking" other non-essential materials onto it, it would become economically viable to recycle them on a large scale, making plastic "eco-friendly". But if companies did this, then anyone could easily make and sell aftermarket parts, and that would reduce the companies' profits.

If you think about it, any product can be made standard and easy to recycle, or be eliminated completely... But profits, economics and politics...

P.S. – I am not rooting for any material, just asking questions…..

Thanks For Reading………

Keep thinking, keep questioning, be curious...

Musical Haptics-I

Musical haptics is an emerging interdisciplinary field incorporating touch and proprioception in music scenarios from the perspectives of haptic engineering, human–computer interaction (HCI), applied psychology, musical acoustics, aesthetics, and music performance.

According to [1], whenever an acoustic or electroacoustic musical instrument produces sound, that sound comes from its vibrating components (e.g., the reed and air column in a clarinet, or the strings and soundboard of a piano). While performing on such instruments, the haptic channel is involved in a complex action–perception loop: the player physically interacts with the instrument, on the one hand, to generate sound by injecting energy in the form of forces, velocities and displacements (e.g., striking the keys of a keyboard, or bowing, plucking and pressing the strings of a violin), and on the other hand, to receive and perceive the instrument's physical response (e.g., the instrument's body vibration, the kinematics of keys being depressed, the resistance and vibration of strings). One could therefore assume that the haptic channel supports performance control (e.g., timing, intonation) as well as expressivity (e.g., timbre, emotion). In particular, skilled performers are known to establish a very intimate, rich haptic exchange with their instruments, resulting in truly embodied interaction that is hard to find in other human–machine contexts.

Music performance presents a well-established framework for studying basic psychophysical, perceptual and biomechanical aspects of touch and proprioception, all of which may inform the design of novel haptic musical devices. There is now a growing body of scientific studies of music performance and perception from which to inform research in musical haptics, including topics and methods from the fields of psychophysics, biomechanics, music education, psycholinguistics and artificial intelligence. [2]

The goals of musical haptics research may be summarized as: (i) to understand the role of haptic interaction in music experience and instrumental performance, and (ii) to create new musical devices yielding meaningful haptic feedback.

One major drawback of existing musical setups is their inability to reproduce the low-frequency bass in music. When someone experiences music in a club or at a concert, the feeling is completely different from experiencing it through headphones. The experience of music goes beyond hearing and the stimulation of the cochlea: the tickling through the stomach and the slight tingling across the whole surface of the skin create the feeling of total immersion. Musical haptics promises to deliver the feeling of a club to a person who wants to experience it without disturbing others. Imagine the feeling of music while jogging in a public park or riding the metro, and that too with the complete bass.

As background, understanding haptics is necessary for designing musical haptics. Haptics has two aspects: tactile and kinesthetic. Tactile perception is the feeling produced by stimulating the mechanoreceptors that lie in the dermis and epidermis layers of the skin, whereas kinesthetics is proprioception (the movement of muscles, limbs and fluids in the body). Music aims to stimulate both categories, and to achieve that, two different kinds of actuators are needed. To stimulate the mechanoreceptors, tactile actuators are used, such as linear resonant actuators (LRAs), eccentric rotating mass (ERM) motors and piezo actuators. To stimulate proprioception, stronger stimulation is required, and larger actuators are used.
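
As a toy illustration of tactile actuation, here is a minimal Arduino-style sketch that pulses an ERM motor in time with a beat (the pin number is an assumption, and in practice the motor sits behind a driver transistor rather than being driven directly from the pin):

const int MOTOR_PIN = 9; /* hypothetical PWM pin */

void setup() {
    pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
    analogWrite(MOTOR_PIN, 200); /* strong pulse on the beat */
    delay(100);
    analogWrite(MOTOR_PIN, 0);   /* off between beats */
    delay(400);                  /* 100 + 400 ms per cycle = 120 beats per minute */
}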

Furthermore, both performers and the audience are reached by vibrations conveyed through the air and through solid media such as the floor and the seats of a concert hall. Those vibratory cues may contribute to the perception of music (e.g., its perceived quality) and of instrumental performance (e.g., in an ensemble, a player may monitor others' performances partly through such cues). These are the feelings lost when one jogs in a park wearing headphones. Haptics promises to bring these cues back to people and to enrich the umwelt. Keep reading the blogs in this series for the existing state-of-the-art technologies, what is lacking in them, and what the future promises.

References:

[1]. Papetti, Stefano, and Charalampos Saitis. Musical haptics. Springer Nature, 2018.

[2]. Berdahl, Edgar, Günter Niemeyer, and Julius O. Smith III. “Using Haptics to Assist Performers in Making Gestures to a Musical Instrument.” NIME. 2009.

To be Continued >>>

Hey Sssiri?

Voice assistant devices have become an integral part of our current technological environment: smartwatches, smartphones, smart earphones, smart homes, and so much more lies ahead in the future. These devices allow users to make phone calls and send texts, do a quick web search, look up the weather, and even control other smart devices (the coolest control I have encountered was dimming the lights) with simple voice commands and no physical interaction with the objects around them (one can even ask Siri to sing a song or tell a joke). There is freedom from wires, buttons and keyboards!

I recently learnt that about 70 million people worldwide suffer from stuttering; the cause can be physiological, neurological or trauma-based. That's roughly 1% of the global population. And this is just one speech disorder; there are many! The functionality (or the limitation?) of voice-controlled smart devices relies on the clarity of the command. See where I am going with this?

While voice control of technology is becoming more accurate (with extensive machine learning, even to recognize accents!), accessing such technology still remains a huge hurdle for stutterers. Current voice-assisted systems fail to identify and intelligibly understand disjointed/broken speech. This limitation is also faced by individuals with other speech disorders. According to research by Frank Rudzicz (Assistant Professor, University of Toronto), for individuals with dysarthria, the word-recognition rates of such technology can be between 26.2% and 81.8% lower than for the general population. Can this be improved upon?

While the manufacturers of these devices boast of reduced word-recognition error for regular speech, achieved by extensively training the devices on data from more speakers, what makes accommodating speech disorders hard is the randomness of when speech gets affected and the unpredictability of which part of speech gets affected. Stuttering can occur at any time; any word can "trip" the speaker into stuttering. While some stuttering patterns can be identified (whether there are specific sounds the stutterer finds difficult, whether the stuttering occurs at the beginning of a word or in the middle of it, etc.), these patterns are almost unique to each stutterer (however, don't be discouraged: stutterers can be grouped by how they stutter). Additionally, there is a wide range in the severity of these disorders; thus, creating one computational model that fits all is not possible.

With voice-enabled technology getting more embedded in our lives (I just saw smart, voice-controlled coffee makers), there is a need for more inclusivity in technology. If they are looking for more data to train their AIs, I am available!

Square v/s Hexagonal Grid

Some time back, we were faced with the challenge of representing alphabets/characters in the least number of cells (so to say). This was planned to be done in a swipe fashion, i.e., only one cell is "active" or ON at a time.
We assumed a 3×3 matrix/array would be the best and optimum solution, as an x-y or row-column structure is what we are so blindly used to following.

But as we progressed, I quickly realized that these alphabets/characters looked rather disfigured, and the experience was not at all satisfactory.

The English language has lots of curvy characters, and a traditional row-column matrix (especially given our requirement's restriction to such a small number of cells, 3×3 = 9) was not doing them adequate justice.

And then it struck me – A beehive.

The hexagonal grid!

As I tried to set the placements right to cover our English character set, I found that two overlapping circles, with the lowermost "cell location" left blank, made the most optimum placement pattern.

Most of our alphabet set looks good in this hexagonal grid formation.

Just pause a minute and imagine how these characters would look in a simple 3×3 square matrix array.

With the same number of cells as the 3×3 matrix, i.e. 9 cells, and just a 50% offset in the placement of the second column, a far better visualization of the English character set can be achieved with the beehive/hexagonal grid than with the traditional row-column or x-y grid.
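
To make the geometry concrete, here is a small C sketch (the cell pitch is an assumed constant) that computes the centre of each cell in both layouts; the hexagonal variant simply shifts every second column down by half a cell:

#include <stdio.h>

#define CELL 10.0 /* assumed cell pitch, arbitrary units */

/* Centre of cell (row, col) in a plain 3x3 square grid. */
static void square_center(int row, int col, double *x, double *y)
{
    *x = col * CELL;
    *y = row * CELL;
}

/* Same grid, but every second column is offset by 50% of the
   cell pitch: the beehive layout described above. */
static void hex_center(int row, int col, double *x, double *y)
{
    *x = col * CELL;
    *y = row * CELL + ((col % 2) ? CELL / 2.0 : 0.0);
}

int main(void)
{
    for (int r = 0; r < 3; r++) {
        for (int c = 0; c < 3; c++) {
            double sx, sy, hx, hy;
            square_center(r, c, &sx, &sy);
            hex_center(r, c, &hx, &hy);
            printf("cell(%d,%d): square (%4.1f,%4.1f)  hex (%4.1f,%4.1f)\n",
                   r, c, sx, sy, hx, hy);
        }
    }
    return 0;
}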

Sound as a construction material

I cast my first SuperCollider spell today. The code below spat out an .aiff file. This blogging platform may not allow me to embed that here in a straightforward manner, so I used VLC to convert it to an .mp3 file (which is what this 2-second clip is).

(
/* 34567890123456789012345678901234567890123456789012345678901234567890123456789
h (h for Hanoi) is a 4 parameter function. f, t and u are the names of 3
pegs. when h is called from the top level (rather than recursively) there are
n > 0 discs (each of a different size) on peg f and 0 discs on the other two
pegs.

h returns a series of moves such that at the end of the series all the discs
have been shifted from peg f (f for From) to peg t (t for To). during the
series some of the discs may temporarily be shifted to peg u (u for Using).

each move consists of a single disc being moved from peg f to peg t. at no
point is a disc placed on top of a disc of a smaller size. when displaying a
move we don't display the size of the disc being moved (as it doesn't affect
the series)

m is a function which moves a single disc from peg f to peg t. a move is
displayed as a pair of sine wave oscillators playing for two seconds. the peg
from which the disc is being removed has its frequency decrease from 100 Hz
above its label to 100 Hz below while its amplitude decreases to zero. the
inverse happens for the peg onto which the disc is being moved.

as written all the moves play concurrently, making the solution
incomprehensible. the last line shows the author hasn't understood
SuperCollider's Client vs Server architecture

*/

s.boot;

h = {|n f t u| // tower of hanoi algorithm
	if (n > 1,{
		h.value(n - 1, f, u, t); // move all but the lowest disc from f to u
        h.value(1, f, t, u); // move the lowest disc from f to t
		h.value(n - 1, u, t, f); // move all the discs earlier moved to u to t
	},{	m.value(f, t);// else move a single disc from f to t
})};

m ={|f t| // sound algorithm
	//(f.asString ++ " -> " ++ t.asString).postln; // textual debugging hanoi
	// if you delete the rest of this function and un-comment the previous line
	// a 'correct solution' is printed.
	{[SinOsc.ar( // 1 of 2 channels, increasing frequency and volume
			freq:Line.kr(start:t-100, end:t+100, dur:2, doneAction:2),
			mul:Line.kr(start:0, end:1, dur:2, doneAction:2)
		),SinOsc.ar( // 2 of 2 channels, decreasing frequency and volume
			freq:Line.kr(start:f+100, end:f-100, dur:2, doneAction:2),
			mul:Line.kr(start:1, end:0, dur:2, doneAction:2)
	)]}.play;
	while ({s.numSynths != 0;},{2.wait;}); // a pair of conceptual errors
};

s.record;
h.value(1, 600, 1200, 1800);
s.stopRecording;
)

The touch of COVID-19, and haptics

On 22nd March 2020, India announced its first nationwide janata curfew as a stint to reduce mass gatherings and implement social distancing, courtesy of COVID-19. Little did we know that this one-day lockdown would be extended into a six-month-long stint (with the advisory for the current month: step out only when needed!). I saw the effect of this lockdown not just in the increasing quarrels among family members but also in their longing to meet friends and other relatives, who were now only seen through a 6-inch screen. In the last six months, I have seen an increasing reliance on technology: not just Netflix-ing and chilling but also a surge in video calls to local friends and family.

Prior to the lockdown, I never realized one could miss high-fives and hugs. Aside from highlighting the faults of healthcare systems worldwide, COVID-19 successfully managed to create isolation bubbles. Now imagine: what if you could hug someone virtually through a call (especially if that loved one were stuck in the hospital alone because of COVID, with no physical contact allowed whatsoever)? Enter haptics.

Our common knowledge of haptics is often limited to the buzz of our phones or smartwatches/bands, with feedback in the form of vibratory notifications. While there is a surge of research into implementing haptic effects in the gaming industry to make VR games more realistic, there is a lack of affordable consumer-ready products.

Where does this leave us? In a world where everything suddenly went online, there is a need for technology that optimizes video calls. While Facebook and Microsoft have geared up their research to make video calls haptics-augmented, hoping to exploit the wonders that 5G internet promises, there stands the massive challenge of cost-optimizing these theoretical ideas (sorry Facebook, the Oculus VR set is expensive!). The quality of haptics depends on the components of the motor that creates the vibrations; better quality implies higher costs. COVID has created distance, but I am hopeful technology will help reduce it (an increased global market projection for the haptics industry also makes me optimistic)! Funnily enough, I got a job during the lockdown and haven't met my team in person yet. I signed the contracts without literally shaking on them, which felt odd. I reminisce about the old times when team members introduced themselves and shook hands! Now imagine if a system for remote handshaking existed; wouldn't that be amazing?

How is haptic augmentation improving the umwelt?

The way we perceive the world around us depends upon the senses we inherit. Humans are blessed with five primary senses: touch, smell, taste, hearing and vision. For each of these primary senses there is a dedicated portion of the brain that works as its processing unit, along with a receptor organ. Vision occupies the largest part of the brain, while touch has the largest receptive region. To enable the sense of touch, different receptors are spread beneath the layers of skin over the entire body, each responsible for a different aspect of touch. Touch is divided into two major categories: active and passive. Active touch is when the user actively explores the world using their skin; passive touch is when the world touches the user, like the shirt the user is wearing, which touches them passively all the time. The objective of haptic augmentation is to use this massive number of receptors, which are mostly unused, to provide the user with information about the world around them in a more systematic manner.

In this machine-dependent world, where we rely on computers or handheld devices for most of the tasks in our daily lives, our eyes are overburdened; the result can be seen in how many people are becoming dependent on glasses for their vision. With haptic augmentation, sensory overload has been reduced in many applications, such as for a pilot sitting in a cockpit who is flooded with information. Another perspective on haptic augmentation is feeling the virtual world or the tele-world: surgeons in tele-surgery, gamers playing in virtual reality, and even our day-to-day reliance on the vibration feedback of mobile phones.

Figure: Mobile Phone Vibrating

Haptic augmentation enables human beings to expand the horizon of sensory inputs they can receive from the world around them. One classic example is enabling users to perceive frequencies lower than what human ears can hear. The audible frequency range is 20 Hz to 20 kHz; however, existing sound-production technology struggles to produce audio frequencies below 60 Hz. Even when such low frequencies are produced, they often do not reach the cochlea without distortion, whereas human skin is sensitive enough to detect these frequencies and transmit them to the somatosensory cortex without interference from ambient noise.

Touching the wheat barley

When someone plans to experience nature and walks into a fully grown wheat field, the feeling is completely different from watching it in a movie or seeing pictures of it. The feeling of a real wheat field goes far beyond watching the wheat grown in harmony, all of the same colour and size, or hearing the recorded sound of the ears of wheat colliding with each other. The difference is the missing information: the feeling of touching the pinching head of the ear, scrolling your palm across it, the feel of the breeze passing through the field, and the smell. Haptic augmentation aims to fill in all these missing feelings.