On The First MVP version 0.0.1 #1
KennedyRichard started this conversation in General
After experimenting with it a bit, I was able to make a couple of boxes, but I never managed to get a label widget, nor a checkbox. So some sort of fall-back mode, where, when the confidence is low, a list of possible options in order of likelihood is offered, may be warranted.
-
Hello, everyone, here's the first MVP version of myappmaker (working title)!
Of course, being an MVP, it represents just the first few iterations and thus there's much to improve/refine/refactor and even possibly redesign. Regardless, I think this current state is enough for a first MVP version. Please, after you read it, let me know what you think so I can improve/fix things in the next iterations.
The MVP
Following our incremental development strategy, I'm implementing each feature one at a time. Naturally, this first MVP version portrays a single feature: drawing recognition or, in practice, the ability to draw on a canvas and have that drawing replaced by a corresponding widget.
Here's a quick preview:
And the full preview:
drawing_recognition.mp4
In other words, while pressing the shift key, you can draw on the canvas and, once you are finished, you can release shift. The drawing is then analyzed and converted into the corresponding widget if there's a match.
The drawings must be defined by the users beforehand in a "strokes settings" dialog, and they have freedom to use any drawing they want for each widget, drawing the individual strokes that form the final drawing. Here's how it works:
stroke_settings.mp4
Just like when drawing on the canvas, in order to draw on the small area to associate a specific drawing with a widget, the user must keep the shift key pressed. Once the key is released, the drawing on that area is associated with the corresponding widget.
The README of the app's repo has instructions on installation. Note that in addition to the `main` branch, which has all commits squashed for simplicity, the repo also has a `feat/drawrec00` branch, where all commits are kept separate for the sake of transparency, so that you can inspect my work in more detail. Remember, though, that my time on the Indie Python project is not only spent on development, but also includes many other tasks like technical writing (like this MVP presentation you are reading, which took me almost a week to finish), content production and marketing, planning, research and user support.

Under the hood
Representing and recognizing drawings
The drawings are represented by the individual strokes that form them. Such strokes, in turn, are just polylines. That is, each stroke is represented by a set of points that, in turn, represent line segments linked to each other by their ends to form that stroke.
Warning
When we say "set" of points, we are referring to sets in the mathematical sense, not the Python collection `set()`. In practice, we usually use Python `list()` and `numpy.array()` collections to hold the points, depending on our needs.

This gives us more speed and simplicity than we'd have if we were to use image processing to compare the drawings. Instead, our analysis only requires us to compare the set of points from the drawing performed on the screen with the set of points of the existing drawings to find a match. Considering this, we might say that our drawing recognition solution relies on how much two sets of 2D points overlap. Maybe a more precise name would be "recognition of points distribution".
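Just to illustrate the representation, here's a minimal sketch of how a stroke could be held as a plain Python list of points and as a numpy array; the variable names are just for this example:

```python
import numpy as np

# a single stroke: the points of a polyline, in drawing order
stroke_points = [(12, 40), (13, 42), (15, 45), (18, 47)]

# the same stroke as a numpy array, ready for vectorized math
stroke_array = np.array(stroke_points, dtype=float)

# a full drawing is just the collection of its strokes
drawing = [stroke_array]
```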
For that, we use the Hausdorff distance, more specifically, the symmetric Hausdorff distance.
Let's imagine a challenge between you and a friend (your opponent). In some room you chose, given a set of spots A and a set of spots B, your challenge is to pick the spot in A that requires your opponent to walk the farthest to reach any spot in B. Granted, regardless of the spot in A you pick, your opponent will try to reach the closest spot in B. Because of that, your only choice is to pick the spot in A whose closest spot in B leads your opponent to walk the longest distance.
This longest distance from a point in A to its closest point in B is what we call the Hausdorff distance from A to B. We call it a directed distance because it has a direction, that is, from A to B.
Here's a simulation where we demonstrate the Hausdorff distance from points in a set A that we move randomly over time to points in a set B:
As shown in the simulation, the Hausdorff distance from A to B is determined by the point from A that is most distant from its closest point in B.
Thus, if the two sets of points overlapped perfectly (that is, if they were identical), the Hausdorff distance from A to B would be 0. That's why the Hausdorff distance can be used as a measure of similarity, or rather, of dissimilarity, since the higher the distance, the more dissimilar the sets are.
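Just to make the definition concrete, here's a minimal pure Python sketch of the directed Hausdorff distance; the function name and the tiny point sets are mine, just for this example:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def directed_hausdorff_distance(a, b):
    """Longest distance from a point in a to its closest point in b."""
    return max(min(dist(p, q) for q in b) for p in a)

A = [(0, 0), (4, 0)]
B = [(0, 1), (4, 1), (10, 1)]

print(directed_hausdorff_distance(A, B))  # 1.0
```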
However, only considering the Hausdorff distance from one set of points to the other isn't enough to assess how similar the sets are. For instance, what if the first set was actually a subset of the second one? That is, what if the points in A were on top of some points from B, or at least very close to them? Here's a representation:
That is, when the first set (the smaller one) is on top of the second set or close to it, although the Hausdorff distance from the first to the second set is 0 or almost 0, we can still clearly see that the sets aren't actually similar (again, A is much smaller than B).
Here's the exact same simulation, but this time we use the directed Hausdorff distance in the opposite direction, that is, from B to A, from the second set to the first one:
As shown in the simulation, if we pick a spot in B to place our opponent instead of a spot in A, we can still force the opponent to walk some distance to reach A. Even when A is almost on top of B, the Hausdorff distance from B to A still shows considerable dissimilarity between the sets. In other words, although the directed Hausdorff distance from A to B is very small, the directed Hausdorff distance from B to A is not, and it gives a more accurate picture of the dissimilarity between the sets.
That's why, in order for us to effectively measure the dissimilarity between two sets of points, we must only consider the highest Hausdorff distance: `max(hd_from_a_to_b, hd_from_b_to_a)`. Because this new measure takes into account both directions (from A to B and from B to A), we call it the symmetric Hausdorff distance.
Let's now replay the simulation, this time showing both directed Hausdorff distances, from A to B and from B to A:
Which one should we consider as an accurate measure of the dissimilarity between the sets? Whichever is higher! In this case, that's the Hausdorff distance from B to A, which here also happens to be the symmetric Hausdorff distance.
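To show the asymmetry in code, here's a tiny hypothetical example reusing the sketch above, where A sits on top of part of a much larger B:

```python
# A overlaps part of B, so the distance from A to B is 0,
# but the distance from B to A is not
A = [(0, 0), (1, 0)]
B = [(0, 0), (1, 0), (10, 0), (10, 10)]

d_ab = directed_hausdorff_distance(A, B)  # 0.0
d_ba = directed_hausdorff_distance(B, A)  # ~13.45, from (10, 10) to (1, 0)

symmetric = max(d_ab, d_ba)  # the symmetric Hausdorff distance
```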
Finally, here's a representation of how the drawings are aligned in order to be compared:
In other words, we align the drawings using their first points. After this, all we need to do is calculate the symmetric Hausdorff distance and determine whether such distance is small enough for us to consider the drawings similar. Remember, the Hausdorff distance is a measure of dissimilarity, so, in our case, the smaller it is, the better.
Additionally, it is not a good idea to only match a drawing when its comparison yields a symmetric Hausdorff distance equal to or near 0. That is so because most people likely won't be able to reproduce a specific drawing perfectly, so the symmetric Hausdorff distance will usually be considerably above 0. The demonstration above is a clear example. Note that the strokes didn't align perfectly, so the points are considerably distant in various parts of the drawings.
So, instead, we arbitrarily decide on a value that is the maximum symmetric Hausdorff distance we are willing to tolerate. Any distance below it is considered close enough, and thus the drawings are considered similar enough to match or, more precisely, not dissimilar enough to be ignored.
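Putting the alignment and the tolerance check together, a minimal matching sketch, reusing the directed_hausdorff_distance function from earlier, could look like this (the tolerance value, the names and the flat lists of points are assumptions for this example, not the actual code in the repo):

```python
MAX_TOLERABLE_DISTANCE = 20.0  # arbitrary value, just for this sketch

def drawings_match(drawn, reference, tolerance=MAX_TOLERABLE_DISTANCE):
    """Align both drawings by their first points, then compare them."""
    # translate the drawn points so the first point of each drawing coincides
    dx = reference[0][0] - drawn[0][0]
    dy = reference[0][1] - drawn[0][1]
    aligned = [(x + dx, y + dy) for x, y in drawn]

    # symmetric Hausdorff distance: the higher of the two directed distances
    dissimilarity = max(
        directed_hausdorff_distance(aligned, reference),
        directed_hausdorff_distance(reference, aligned),
    )

    return dissimilarity <= tolerance
```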
Currently, in the MVP, I set a hard value that was comfortable to me when drawing, just for the sake of testing the feature, but in the next iterations I'll allow users to edit such value. This way they can try different values and choose for themselves the most comfortable tolerance. This is especially important when the user wants to use smaller, more precise drawings, in which case the user should work with smaller tolerance values, to avoid the small drawings being confused with one another.
Libraries and functions/classes used
Despite being relatively easy to calculate, the directed Hausdorff distance requires a large number of small calculations. To begin with, we need to calculate the Euclidean distance from each point in one drawing to each point in the other drawing. This means comparing two drawings of 100 points each requires 10,000 Euclidean distance calculations.
Then, for each point in the first drawing we need to find the closest point to it in the second drawing, and finally take the highest of such distances (again, the maximum distance we can force an opponent to walk from one set of points to the other, assuming the opponent always walks to the point in the other set that is closest to wherever we placed them).
With that, we are finished calculating the directed Hausdorff distance from the first drawing to the second one. Now we just need to repeat all the calculations, this time for the Hausdorff distance from the second drawing to the first one, and use the higher of the two Hausdorff distances we calculated as our symmetric Hausdorff distance.
And all of this is just one comparison. In our daily usage, we'll likely need to compare a user drawing with several others, not just one.
As you can see, this is a lot of calculations.
Thankfully, such calculations can be significantly sped up by representing the drawings with numpy arrays, from the third-party Python library numpy. Operations designed for numpy arrays use well-optimized C code, thus making the otherwise time-consuming calculations needed for the Hausdorff distance a breeze. The next step is to create or find an implementation of the Hausdorff distance that takes advantage of numpy arrays. Such an implementation already exists in a third-party Python library called scipy, namely, the scipy.spatial.distance.directed_hausdorff function.
Both numpy and scipy are widely used, trusted Python projects, so having such libraries as dependencies shouldn't pose a problem. Other Python packages also offer functions to calculate the Hausdorff distance, like shapely and scikit-image (this one has scipy as a dependency as well). Those packages may be considered in the future as well, but for now the combo numpy+scipy seems effective and performant enough.
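For reference, here's roughly how the function can be used; scipy.spatial.distance.directed_hausdorff returns the distance together with the indices of the points that produced it, so we only keep the first element of the result:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.array([(0.0, 0.0), (4.0, 0.0)])
b = np.array([(0.0, 1.0), (4.0, 1.0), (10.0, 1.0)])

# each call returns (distance, index_in_first_set, index_in_second_set)
d_ab = directed_hausdorff(a, b)[0]
d_ba = directed_hausdorff(b, a)[0]

symmetric_distance = max(d_ab, d_ba)
```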
Python and pure C versions
Despite the availability of many Python third-party libraries for the Hausdorff calculations, given the relative simplicity of such calculations, I think it is worth considering implementing our own solution. As a software designer/developer and maintainer of open-source projects, I also recommend and promote the practice of relying as little as possible on third-party libraries.
Although we are talking of trusted libraries like numpy and scipy, depending on external libraries when it is avoidable leads to extra work to keep our own project free of inconsistencies/incompatibilities between different versions of our software and the different libraries. When the libraries are truly needed, then such work is justified. Otherwise, it is just a waste of time and effort that could be used in other areas of our project.
On top of that, in our case, we are not using a considerable amount of such libraries' APIs. In fact, we only ever call `numpy.array()` and `scipy.spatial.distance.directed_hausdorff()`. Two whole libraries for the sake of only two callables.

Because of that, I actually tried my hand at creating a pure Python implementation. The first version, without possible optimizations that I can still explore, already runs only about 50 times slower than the numpy + scipy solution. For now it may still not be good enough to replace the numpy + scipy solution, but future improvements may reduce this gap, even if we never end up relying on it.
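One of the optimizations still left to explore is an early break in the inner loop, the same trick scipy's implementation relies on: while searching for the closest point, we can stop as soon as we find a point closer than the running maximum, because that outer point can no longer raise the final result. Here's a hedged pure Python sketch of the idea (not the actual code in the repo):

```python
from math import dist

def directed_hausdorff_early_break(a, b):
    """Directed Hausdorff distance with an early break in the inner loop."""
    overall_max = 0.0

    for p in a:
        closest = float('inf')

        for q in b:
            d = dist(p, q)

            if d <= overall_max:
                # this point of a can't raise the overall maximum anymore,
                # so there's no need to keep searching for its closest point
                closest = d
                break

            if d < closest:
                closest = d

        if closest > overall_max:
            overall_max = closest

    return overall_max
```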
The fact we may not rely on it in the future, even if it improves, is due to the possibility of implementing such functionality in pure C. Since Python can interact with C code, and as explained, the Hausdorff distance only requires easy calculations (the problematic factor being the high number of such calculations needed), a small C module should probably be more than enough to provide the Hausdorff distance calculation we need without needing to rely on third-party libraries.
Despite being something I want to pursue, though, I might not do so now, for the sake of keeping my focus on other, more pressing needs of the myappmaker project. However, in the future, I do intend to set aside some time to try this. In fact, a quick Google search for such an algorithm I performed a few days ago was sneakily taken as an AI prompt, which produced most of the code required in just a few lines, not much different from what I had in mind, might I say. This is a good indication that I was on the right track despite not being very fluent in C.
I might end up actually creating and implementing such solution on a whim one of these days, since it is something that actually interests me, but for now I'd like to keep working on more urgent tasks within myappmaker and other child projects of the Indie Python project.
Optimizations and simplifications
So far, we can assume that in order to compare a drawing from a user with the existing drawings associated with widgets to find a match we just align the user's drawing with each existing drawing and calculate their symmetric Hausdorff distance, then pick the lowest one if it is below a tolerable value.
However, we don't need to do so for all existing drawings. There are a number of filters we can apply to the set of existing drawings in advance, in order to reduce the number of drawings to compare.
First, we can use the number of strokes. Only drawings that take the same number of strokes as the drawing from the user are considered.
Second, we can use the width/height ratio of the whole drawing and of its individual strokes in order to compare them with the width/height ratios of the existing drawings that "survived" the stroke count filter. For instance, the image below shows the drawing I use for a label to the left and the drawing I use for an unchecked checkbox to the right. For the drawings associated with widgets, such ratios can be calculated in advance, when such drawings are set by the user. Then, when the user draws something on the canvas, that drawing and its individual strokes can have their ratios measured and compared with the pre-calculated ratios of the existing drawings.
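As an illustration of how these pre-filters could be chained, here's a minimal hedged sketch; the dict structure, names and tolerance are assumptions for this example only, and the ratio comparison is simplified (the actual comparison uses natural logs, as explained next):

```python
def ratios_are_compatible(ratio_a, ratio_b, tolerance=0.4):
    """Simplified comparison; see below for the log-based version."""
    return abs(ratio_a - ratio_b) <= tolerance

def prefilter_candidates(user_drawing, stored_drawings):
    """Keep only stored drawings that could still match the user's drawing,
    before any Hausdorff distance is calculated.

    Each drawing is assumed to be a dict with a 'strokes' list and a
    precomputed width/height 'ratio'.
    """
    return [
        stored
        for stored in stored_drawings
        # first filter: same number of strokes
        if len(stored["strokes"]) == len(user_drawing["strokes"])
        # second filter: width/height ratios close enough
        and ratios_are_compatible(stored["ratio"], user_drawing["ratio"])
    ]
```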
Additionally, just like with the Hausdorff distance, we also consider some tolerance, since the width/height ratio varies with each attempt at reproducing a drawing. This tolerance appears in our algorithm in a number of ways. First, instead of the actual width/height ratio, we use the natural logarithm of such ratio (in Python this is easily calculated with `math.log(ratio)`). Put simply, the natural logarithm is used in math to make variations more uniform by taking into account how large or small the original value was before it was subject to that variation. Second, we then pick an arbitrary tolerance value in order to compare such natural logs of the ratios with each other, one that allows our drawings to be interpreted with a good rate of accuracy. This tolerance value is also currently set as a hard value in the MVP, but will also be available for users to edit in the next iterations.

Also, not a tolerance, but another measure we take is that when one of the dimensions of a given stroke or of the whole drawing (that is, the width or the height) is more than 10 times larger than the other, for the sake of simplicity we consider it only 10 times larger, even if in practice it is 11, 12, ..., or 50 times larger. In other words, we just want a simple way to define whether a stroke or the whole drawing is oriented horizontally, vertically, or whether it is more like a square. Originally I considered using only these labels to define the strokes and drawings, that is, "horizontal", "vertical" or "square". However, in the end using numbers is unavoidable, as we need some measure of tolerance. The reason is that, as we already know, when people try to reproduce a drawing, the actual dimensions and thus the ratios end up with variations, so a drawing that is supposed to have "square dimensions" may end up with one of its dimensions larger than the other.

Another measure we take to simplify/optimize our calculations is to not include points that are too close to each other in the drawings. In other words, when users are drawing, whether on the strokes settings dialog to define a new drawing for a widget or on the canvas so that the drawing is replaced by the corresponding widget, we actually discard neighboring points during the action if they are too close to each other. This doesn't result in any drop in visual quality perceptible by the naked eye, while at the same time reducing the number of points for our Hausdorff calculations.
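Here's a minimal sketch of how the clamped log-ratio comparison and the point decimation just described could look; all the constant values and names are assumptions for this example, not the ones actually used in the MVP:

```python
from math import log, dist

MAX_RATIO = 10.0            # ratios beyond this are treated as exactly 10
LOG_RATIO_TOLERANCE = 0.4   # arbitrary value, just for this sketch
MIN_POINT_DISTANCE = 3.0    # in pixels; also arbitrary here

def clamped_log_ratio(width, height):
    """Natural log of the width/height ratio, clamped to [1/10, 10]."""
    ratio = width / height if height else MAX_RATIO
    ratio = min(max(ratio, 1 / MAX_RATIO), MAX_RATIO)
    return log(ratio)

def log_ratios_are_close(width_a, height_a, width_b, height_b):
    """Compare two width/height ratios through their natural logs."""
    difference = abs(
        clamped_log_ratio(width_a, height_a)
        - clamped_log_ratio(width_b, height_b)
    )
    return difference <= LOG_RATIO_TOLERANCE

def append_if_far_enough(points, new_point):
    """Discard a new point if it is too close to the previous one."""
    if not points or dist(points[-1], new_point) >= MIN_POINT_DISTANCE:
        points.append(new_point)
```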
In conclusion, we avoid needless processing and save time by employing simple, relatively cheap filters and simplifications in advance, before performing a single Hausdorff distance calculation.
Testing with the mouse
I actually never tested or refined the feature using a stylus pen, as is the intended use for this feature, even though I do have a stylus pen and accompanying tablet available (which I use for drawing pixel art assets for gamedev). The reason is very simple: design experts test physical products under non-ideal circumstances in order to accurately assess their usability. This is a great technique that I thought to be suitable for testing our MVP, despite being a digital product, since it ultimately relies on the hand coordination of the user.
This usage in non-ideal circumstances practiced by design experts helps highlight usability deficiencies earlier, so products can leave the drawing board with fewer or no usability problems.
For instance, design expert Dan Formosa often employs the "left-handed oil test": before using a product, he spreads oil on his hands in order to make his non-dominant hand slippery. If he can effectively use a product even with a slippery left hand, it is an indication that the product has good usability. Here's a YouTube video showing it (you don't need to click the video, it is here just in case you are curious to see the technique in practice; and if you do click it, you don't even need to watch it entirely, as the link drops you at the moment in the video when he employs the technique for the first time):
In the case of our MVP, I thought using the mouse would make for a suitable "left-handed oil test".
In addition to that, the feature is simple but useful enough that many users may actually find it desirable to use it with the mouse. For instance, in pixel art, especially at lower resolutions, the mouse is often more than enough to work effectively on a specific asset. I actually use the mouse way more often than the stylus pen and tablet for my gamedev art.
Moreover, relying on the mouse means the user doesn't have to switch peripherals when using the computer. I assume, for the vast majority of people, the mouse is always plugged in anyway, whereas the tablet is only plugged in for specific work sessions that require it, especially on laptops, where the number of USB ports is limited.
Even better: relying on the mouse erases none of the benefits of using the pen stylus and tablet. On the contrary, people can still work exclusively with the pen stylus and tablet or switch to them when desired. The pen and tablet would still retain their usefulness and effectiveness, while providing even more precision of drawing for those that so desire/need, allowing users to operate with lower tolerance values and thus allowing them to work with more detailed drawings.
Of course I'll test the feature with the pen stylus as well in the next iterations, so we can be sure that there are no problems on that end too. I'll do that despite considering it unlikely that a feature with good usability when relying on mouse movement while pressing the left button would not present even better usability when relying on a pen stylus + tablet, which are expected to allow more precision for the user. As designers/developers we must be thorough.
It is also important to add that, because I consider the mouse a peripheral that many users will rely on for their usage of myappmaker, I use a relatively large area for the user to define drawings for the widgets, given the lower accuracy of the mouse. It is not that I consider such areas large in themselves, it is just that if the feature were meant solely for usage with a stylus pen, then a smaller area would suffice. Again, the bigger area doesn't hinder working with a pen stylus. Quite the contrary, the bigger area means pen users can use even bigger and more detailed drawings. In other words, the large area allows effective usage of both peripherals.
Would it be a good idea to allow users to define such an area themselves? I don't think that's needed, but I wanted to call attention to the possibility anyway.
Stroke order
You may or may not have noticed, but since the width/height ratios of all the strokes together, as well as of each individual stroke, are taken into account when analysing the drawings, stroke order is crucial. Additionally, since the first point in each drawing is used to align them, the direction of the first stroke is also important. For instance, let me reproduce the last simulation, drawing the same strokes, but using a different orientation/direction for them (i.e. I start the first drawing from the top-left corner of the square and the second drawing from the top-right corner):
You may be thinking that perhaps I should instead align the drawings by the center point of their bounding boxes. That is feasible, but I'm not sure it would allow for more precision. You see, the way we write and draw seems to be ingrained according to how we practice writing/drawing such shapes. Not only the shapes themselves, but also the order/directions used seem to become part of the act of drawing them.
Likely, it is no coincidence that both artists and users of languages that rely on ideograms (like Chinese and Japanese) or more "complex" phonetic symbols like Korean Hangul/Chosŏn'gŭl emphasize the importance of following specific stroke directions. For artists, such directions are the ones that take best advantage of your hand coordination, so much so that it is even recommended that you slightly rotate your hand or even the paper in order to keep such comfortable angles/directions, with drawing software even providing a feature to rotate the drawing on the screen to that end. For complex characters in specific languages like the ones mentioned, specific stroke orders are taught in order to guarantee uniformity of results, much like in art.
Because of that, I think there is no harm in enforcing stroke order/direction, especially since, like in art, we implemented the feature so that users can draw the patterns however they want. In other words, users can already make sure they are providing drawings that are drawn how they like, with strokes in the order/direction of their preference. And as I just said, stroke order helps with uniformity.
An unintended but welcome side-effect is that drawings with the same strokes can coexist, as long as the stroke direction and/or order is different. You may argue: "But wouldn't that be confusing?" Perhaps to an outsider observing your work within the app, but not to you, who defined the drawings yourself on the strokes dialog and know your own drawing practices/conventions. Whether or not to use this is up to you anyway.
Further considerations/analyses
Other comparison algorithms/paradigms (using images)
Although using sets of points offers much simplicity and speed, it doesn't mean I discarded the possibility of using image recognition for recognizing the drawings. From the perspective of accessibility, we must assume that many individuals, whether due to adverse conditions or not, may not be able to draw with much precision. Because of that, image recognition solutions may be more appropriate for specific cases. For now, though, that is not something set in stone.
About all that, three considerations are relevant. First, by using the word "may", I mean that the tolerance value used in our recognition of 2D points distribution solution already accounts for that difficulty in drawing precisely, and it might be enough, or it might even offer as much flexibility as image recognition would. Second, such an additional solution shouldn't replace the current solution. Rather, users should be able to use whichever solution they see fit considering their needs. Finally, I suspect our current solution, which uses recognition of points distribution, is likely to be much more adopted by users than an eventual image recognition solution, because our current solution is performant, simple and effective.
As previously discussed, our solution can likely even be improved to use only our own C code, not even having to rely on third-party libraries anymore, whereas an image recognition solution will most likely require additional dependencies, demand more processing time and be less performant.
The last point ("less performant") is debatable, though, as the drawings are relatively small and the techniques we employ to simplify and optimize the recognition (for instance, checking the ratio of the drawings before comparing them) can also be used for the image recognition solution. Even so, all things kept equal, I think it is safe to assume that comparing two sets of points will, in the vast majority of cases, be much quicker than comparing two images of the same size.
Additionally, such image recognition solution, given the higher number of dependencies, higher processing times and (assumed) less wide adoption, could likely be kept as an extension. For instance, something that the user must install separately. That would actually be relatively simple for users of myappmaker, requiring only a single extra command (the one to install the additional dependencies) and again, just like the mouse doesn't hinder the usage of the pen stylus, having recognition of 2D points distribution as the default and "built-in" solution won't hinder the usage of an alternative image recognition solution.
Another final thought is that, I must remind you, an image recognition solution may not even offer precision superior to that of our current solution, given our usage of tolerance measures that much alleviate the problem of imprecise drawing. Because of that, I'm not certain we should pursue this possibility so soon, given the enormous size of the myappmaker project and the already big list of things to implement for it. However, because we do value accessibility, I don't want people who may need this additional solution (provided it does indeed improve precision in a way that our solution doesn't) to be left unattended.
So, for now, my recommended measure regarding all of this is: let us in fact postpone this and focus on the next tasks/steps, but at the same time, if people do reach out to us asking for this feature or complaining about the precision of our current solution, we then pause whatever we are doing and give this new solution priority instead.
As always, everyone's feedback on this is much appreciated.
Keeping the shift key pressed while drawing
Regardless of how the comparison is done (whether by points or by image recognition), another decision we must make is regarding how the drawing is triggered and input. At first, the drawing could be initiated any time the user wanted by holding the left mouse button and dragging it around the screen. After the end of each stroke a timer would start and, if another stroke wasn't drawn within a couple of seconds, the drawing would then be collected for comparison.
This was not bad per se, but had a number of disadvantages. First, users couldn't draw freely without worrying about their timing, which may lead to even more imprecision. Second, regardless of the delay employed, there are always people who want their drawing input right away instead of waiting. Finally, by automatically associating the mouse click and drag with drawing, we lose other useful mouse functionality. For instance, the mouse could be used to click and drag widgets to move them, or simply to click a widget to change its state (like check buttons/boxes whose value is toggled when we click them).
Because of that, I changed this so that now users can take as much time as needed. All they need to do is press the shift key and start drawing. Then, when they are finished, they can release the key and their drawing is analyzed right away, no waiting needed. And the mouse can still work as usual, for clicking, moving and selecting things.
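Just to make the interaction flow concrete, here's a minimal, framework-agnostic sketch of how such an input mode could be handled; the class and method names are my own assumptions and don't reflect the actual code in the repo:

```python
class DrawingModeHandler:
    """Collects strokes while shift is held; analyzes them on release."""

    def __init__(self, recognize_drawing):
        self.recognize_drawing = recognize_drawing  # callback for analysis
        self.shift_held = False
        self.strokes = []

    def on_shift_pressed(self):
        self.shift_held = True
        self.strokes = [[]]  # start with one empty stroke

    def on_mouse_drag(self, point):
        # while shift is held, dragging the mouse adds points to the
        # current stroke; otherwise the mouse keeps its usual behavior
        if self.shift_held:
            self.strokes[-1].append(point)

    def on_mouse_released(self):
        # the next drag, if any, starts a new stroke
        if self.shift_held and self.strokes[-1]:
            self.strokes.append([])

    def on_shift_released(self):
        self.shift_held = False
        strokes = [stroke for stroke in self.strokes if stroke]
        if strokes:
            self.recognize_drawing(strokes)  # analyzed right away, no timer
```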
The feature picked
After many discussions regarding myappmaker's design with William, both in our early email exchanges and then on GitHub discussions, the thing that caught my attention the most about myappmaker was the usage of drawing. Since the beginning, William pointed out how programmers are often recommended the usage of pencil and graph paper to design interfaces, given the lack of suitable digital tooling. Later, he offered even more details, for instance, about how drawing could be mapped to programming and interface elements, like using drawing to insert UI elements/widgets.
There are actually so many aspects already discussed and listed for research regarding drawing that we still have much work ahead on this front! However, that's discussion and work for the next steps. For now, I'm glad we already managed to take the first small step toward our dreamed 1.0 version. As I have said repeatedly since the beginning of our discussions at the end of September, when this project was greenlit, being able to produce an early MVP/alpha is the mark of a healthy project, because it means the people heading it are able to put what they are discussing into practice, that is, they have at least some idea of what they are doing and at least some competence to do it.
Getting back to talking about the MVP, as I hinted in a subsequent reply, I did implement the drawing feature in a way that users can freely define the drawing they want associated with each specific widget and its configuration:
Regardless of the final form the feature will end up taking in the future, I'm also glad for the fact that we already have a working version. This is so because, as the developer heading the app's development, the two features that, given my knowledge and experience, seemed the most challenging to implement were drawing recognition and the design of the system representing the app being designed within myappmaker, in order to allow fine-grained control and interfaces like inspectors, etc. In just a few weeks we already have a functional version of the first feature, even if it happens to change in the future. The second one, however, will of course require much more research and iterations before I can present a functional version, but such time will come. For now, let's focus on the next closest steps.
Planned expansion for the drawing recognition feature
William, this idea of yours to use drawings to insert widgets is truly amazing. I already agreed on it on the spot when you first presented it, but the more I worked on it, the more I realized the immense usefulness and potential of the feature. Don't you agree with me that this feature is expressive enough to allow communication with the whole app, and not just for widget insertion? If we think about the feature differently, it is actually a different way to interpret mouse (or pen) input. It just happens that you found it useful to use it to define widgets. However, if we think of the feature as a way to communicate with the system, that is, a new input method, just like the keyboard or the mouse, our possibilities expand considerably.
For instance, in addition to inserting widgets, if we cross out a widget on the canvas, it could be deleted. Or if we circle a widget and then trace a specific pattern, we might trigger some sort of specific configuration for that widget. In other words, this feature could give us even more expressiveness, flexibility, usability and power.
That's why although this first MVP version presents a stroke settings dialog that is used for associating drawings with widgets to be inserted, I intend to expand the use of drawings on the app so that such dialog will actually associate drawings/strokes with actions. It just happens that some of those actions will consist of inserting a widget.
What do you all think of this?
This expanded usage of drawing recognition isn't anything new actually. It is just that working with drawing recognition for widget insertion as we are using now in the MVP reminded me of another piece of software I came across and used very briefly several years ago, called Easystroke Gesture Recognition, on Ubuntu (GNU/Linux OS). See it in practice in this video from 2010 (15 years ago!):
I have known about the usefulness of gesture recognition since long ago. However, probably because nowadays I use the keyboard to perform tasks on my computer way more often (even more often than the mouse), for things like browsing my system, performing all kinds of OS tasks and switching between apps and windows, I probably forgot all about such gesture recognition apps. Also, since I'm a developer, most of my time spent on the system is on the text editor writing or on the browser reading/writing/posting, rather than moving around the system and its apps/windows, so I don't even miss such an app in my current daily use of the PC.
However, for specific apps that are way more visually oriented, like myappmaker, which will have several visual interfaces with visual elements and widgets (like the canvas for laying out widgets), this expanded usage of drawing recognition to trigger actions has much, much potential to improve our usage.
I myself already want to implement this feature in other apps of the Indie Python project. In Nodezator, for instance, it could be used to instantiate specific nodes, or to delete and connect nodes. In gamedev, I think it would be awesome to use it as a way to insert specific assets inside a level editor application, edit them, etc. The potential is enormous.
Concluding
Well, there may be things I forgot to mention, but that's what I could remember. When dealing with such detailed projects and their systems/features, it is almost impossible to remember every aspect and properly comment on them. Because of that, if I missed anything or you need anything clarified, just let us know. Additionally, your feedback is both welcome and needed, so please, I'd love to hear what you all have to say about all this and about your experiences testing the MVP.
As an open-source maintainer, I just want to highlight this: although part of the goal of the Indie Python project is to promote and provide value, which motivates us to produce effective and practical code, learning is also part of our goal. Regardless of whether our current "points distribution recognition" solution remains like this or is changed or even replaced in the future, I was only able to come up with such an idea and corresponding solution because of my background in both art (I had art lessons in the past, on top of studying it by myself in recent years) and Japanese reading/writing (which I also studied by myself, although I'm not skilled, let alone fluent). The lesson is that as software designers/developers we must always strive to learn new things and value understanding things over speed. That is, it is better to spend more time on a task and understand it properly (its requirements, related concepts and possible solutions), than to rush it just for the sake of meeting deadlines (although, of course, we must also strive to meet them when possible).
I think the feature provided in the MVP already makes for an excellent showcase of the potential/future capabilities of myappmaker and can already be shown on the project's social media and other channels in order to start attracting new users/followers of the app. We can even use William's original argument to draw attention to it: "pen and graph paper? Why not draw on the screen directly?", or something like that. As I mentioned previously, I'll likely produce a few showcases of this feature in Nodezator and in my gamedev in-house tools (temporarily in-house, as I intend to publish them once they are a bit more polished; more specifically, the level editor I use for Bionic Blue).
Speaking of pen and graph paper, as you could see, the MVP only showcases the ability of the drawings to be recognized and replaced by default-looking widgets. Our next steps will likely include things like customized hand-drawn looks for the widgets, in order to provide a more friendly-looking interface, as well as several other themes, like the LCARS theme requested by William.