
Structuring topics #167

Closed
iHiD opened this issue Jul 16, 2017 · 26 comments

@iHiD
Member

iHiD commented Jul 16, 2017

One of the learnings that came out of the Reboot process is that helping students find the right exercises to work on is a critical part of making Exercism enjoyable, reducing the barrier and time to learning, and reducing frustration. There are three axes we are initially doing this on: difficulty, average time to completion, and topics.

As such, topics will appear around the site in lots of places. For example in the following two images you can see topic percentage completion on a track and topics on an exercise card. Integrating topics into the site in this way means there are certain "design rules" that we need to follow to make them look good. This means, for example, not having topics that have more than X characters in order to fit them on a line.

[Two screenshots: topic completion percentages on a track, and topics shown on an exercise card.]

There's also other nice stuff that we'd like to be able to do with topics, such as analysing coverage across tracks and looking at cross-correlations with difficulty, length, completion %, etc. This sort of data might help us find the strengths and weaknesses in each track and work out what sorts of exercises we could do with adding, and where.

This leaves me with two linked considerations:

1) We need guaranteed consistency within a track

To guarantee consistency I would like to propose that topic names listed in the config.json should be underscored lowercase alphabetic (e.g. control_flows_if_else). This makes them easy to visually scan and allows us to more easily notice differences than when comparing more natural-language strings, e.g. "Control Flow (if / else)" and "Control Flow (if | else)". It also allows us maintainers to structure these topic names in a way that makes sense to us but that might not naturally make sense to a student (e.g. the phrase "control flow" is useful for experienced programmers but scary and confusing for beginners, who might be better off just seeing "if / else").
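A minimal sketch of that naming rule, assuming a simple regex check (the helper names here are illustrative, not part of any proposal):

```python
import re

# Underscored lowercase alphabetic, e.g. "control_flows_if_else".
TOPIC_RE = re.compile(r"^[a-z]+(?:_[a-z]+)*$")

def valid_topic(name):
    """Check that a topic name follows the proposed convention."""
    return bool(TOPIC_RE.fullmatch(name))

def normalize_topic(display):
    """Collapse a free-form label such as "Control Flow (if / else)"
    into the underscored form."""
    words = re.findall(r"[A-Za-z]+", display)
    return "_".join(word.lower() for word in words)
```

With this, both "Control Flow (if / else)" and "Control Flow (if | else)" normalize to the same control_flow_if_else, which is exactly the kind of accidental variation the convention is meant to rule out.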

2) Display names in the UI will want to be pretty and flexible

For example, we might want to put a unicode character in the UI, but we wouldn't want that in the neat underscored track data in config.json. We might also want to rename all "If / else" to "If and else" or "If | else" without having to update the config.json data everywhere. We therefore need a mapping between the topic names that appear next to exercises in the config and the UI versions. We don't want to shoehorn tracks into following certain topics where that's not natural - languages are different - but where there is overlap, we want to utilize it where possible.

Suggestions

After some brief thought I think this could be in three places:

a) As a separate array in the config.json, so the exercises.topics are keys pointing to the topics array. This doesn't help us with cross-track consistency, but will be good for ensuring track consistency.
b) As an array in a global cross-track config file, which enables us to guarantee consistency. This strikes me as great in theory, but a big pain for adding new topics.
c) In the UI database, managed by the UI team rather than the track maintainer. New topics that hadn't been seen before would be "titleized" (so "control_flow_if_else" would become "Control Flow If Else" in the UI) and flagged to be checked by the UI team, who could then enforce whatever rules they want.
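For option (c), the "titleize" fallback could be as small as this sketch (the function name and behaviour are assumed from the example above, not an existing implementation):

```python
def titleize(topic):
    """Default UI name for a topic that hasn't been mapped yet,
    e.g. "control_flow_if_else" becomes "Control Flow If Else"."""
    return " ".join(word.capitalize() for word in topic.split("_"))
```

The UI team would then replace these auto-generated names with prettier ones as they review flagged topics.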

I'm edging towards (c). If we did this then I'd happily go through all the topics across all the tracks so far and write an initial mapping and put PRs out to update the existing configs.

This still leaves me with the question about consistency across tracks, but I wonder if that's just something that we periodically review, rather than having a formal process for.

As a side note, there's an existing file here which contains a set of the current topics in use. I'm not sure how comprehensive or up to date it is.

I'd value everyone's thoughts. (cc @exercism/track-maintainers)

@rpottsoh
Member

As a side note, there's an existing file here which contains a set of the current topics in use. I'm not sure how comprehensive or up to date it is.

I refer to it. Looks like the file is 11 months old and was last updated in January. Would this file be updated by replacing spaces with underscores?

@iHiD
Member Author

iHiD commented Jul 16, 2017

Would this file be updated by replacing spaces with underscores?

I would suggest:

  1. Updating this file with underscores and the UI versions, keeping it as a point of reference showing that mapping in a visually useful way, and programmatically updating it whenever changes are made on the website (i.e. new config.json changes are read in or the UI names of tracks are changed).
  2. Generating PRs for all tracks that update the topics with the underscored versions.

@tleen
Member

tleen commented Jul 16, 2017

While on the topic of topic consistency, I'm going to link in exercism/problem-specifications#820.

@ErikSchierboom
Member

ErikSchierboom commented Jul 17, 2017

Updating this file with underscores and the UI versions.

+1

@rchavarria

I'd like to see cross-track consistency in the topics used. That would help the UI a lot in showing consistent topics across different languages.

It looks like a good idea to have topics internally managed as lowercase with underscores (control_flows_if_else), while that value could be translated in the UI. So control_flows_if_else would become Control flows (if-else) or Control Flow (if/else).

So far, as a track maintainer, I've been using exercism/problem-specifications#820 as the source of truth for topics. It can be that file or any other file, but I'd like to have a list of topics, shared by all tracks, that would make cross-track consistency easier to achieve.

@ErikSchierboom
Member

Maybe we should restructure the existing file to be a set of key/value pairs. Something like this:

{ 
    "control_flows_if_else": "Control flows (if-else)",
    "regular_expressions": "Regular expressions",
    ...
}

This mostly depends on whether we want to use the topics.txt (or topics.config/topics.json) file as the source for the UI display of the topics. If we want to do that, the key/value pairs make sense. If we don't want to do that, perhaps just having an array of normalized topics would suffice:

[
    "control_flows_if_else",
    "regular_expressions",
    ...
]

Another thing to consider is if the topics themselves should have some sort of metadata. One could for example "link" similar topics, such that when you are working on an exercise that has the "lists" topic, it could suggest an exercise with the "queues" topic.

@iHiD
Member Author

iHiD commented Jul 17, 2017

Thanks for feedback so far :)

My thinking is that the database should be the single source of truth, and that that file should be auto-generated every time the database topics are updated (either because of an update to a track's config.json or because a topic name has been changed in the db).

I think the file format itself should be whatever the maintainers are going to find easiest to use. It could be a markdown file or a json file, and could be organised alphabetically or by most popular topic. I don't really have an opinion on this as I won't be consuming it, but I think the key is that it makes maintainers' lives easier where possible.

Another thing to consider is if the topics themselves should have some sort of metadata. One could for example "link" similar topics, such that when you are working on an exercise that has the "lists" topic, it could suggest an exercise with the "queues" topic.

This is a really great idea. Again, I suspect the right place for this should be in the db, where the full picture is clearest, and we can algorithmically generate some of this stuff.

@rchavarria

Sorry for asking, but I don't know what database @iHiD is talking about (or where to find it).

It's not a big deal for me, as long as we have a text (txt, json) representation of topics in that database (that's a lovely idea). I think a small gap in the synchronization is manageable (they don't have to be synchronized every millisecond).

@iHiD
Member Author

iHiD commented Jul 17, 2017

@rchavarria - Sorry. I mean the main database of the website. At the moment a new version is under development as part of nextercism.

@rchavarria

@iHiD - yep, I thought it was more or less that. So, it would be very nice to have the text representation of the topics. I don't expect it to be done right now; from my side, I can work with the list of topics we have in TOPICS.txt.

I'll be listening to this conversation.

@rbasso

rbasso commented Jul 18, 2017

I feel that cross-track consistency of topics isn't something we should pursue, because concepts with similar names have distinct meanings across languages, and similar concepts have different names:

  • Maps and folds have different names in different languages. What would we call them?
  • Which name should we use to describe parametric polymorphism? Will it be easily understood by users of all languages?
    • Templates?
    • Generics?
    • Type variables?
  • ByteString, String and Text in Haskell are all sequences over an alphabet, but the name String is only used for a list of characters. I guess that in R it would be a character type, which may have zero or more characters, while other languages consider a Char or char to be exactly one.

What I'm trying to make clear with those unfortunate examples is that each language has its own vocabulary! A set of topics common to multiple languages would feel alien to most of them, and that would just confuse the users.

Also, track-specific topics allow tracks to be more self-contained.

@ErikSchierboom
Member

A set of topics common to multiple languages would feel alien to most of them, and that would just confuse the users. [emphasis mine]

I would argue that your statement is too harsh. I don't think many topics would be alien to most of them. While certainly true for some topics (the strings are a good example), it won't be true for a lot of other topics. For example, the topics "Booleans", "Sorting", "Abstract classes", "Bitwise operations" are likely similar enough across languages to allow them to be used across the board.

It is interesting to note that when I created the "TOPICS.txt" file, I intended it as a starting point. If the desired topic wasn't there, you could add your own. I do agree that there should be some way for individual language tracks to define their own topics, as it is quite possible that a topic is unique to a language. Maybe each language track should be able to define their unique topics in the config.json file?

@rbasso

rbasso commented Jul 18, 2017

I would argue that your statement is too harsh,

Sorry if I was too incisive and over-generalized, @ErikSchierboom. I'm definitely not a very polite person.

Let me rephrase as "...some concepts would feel alien to at least some languages...". 😁

For example, the topics "Booleans", "Sorting", "Abstract classes", "Bitwise operations" are likely similar enough across languages to allow them to be used across the board.

Let's separate those examples in groups, so that we can discuss it further:

  • Theoretical problems: "Sorting".
  • Mathematical concepts: both "Booleans" and "Bitwise operations" have mathematical interpretations (1 2).
  • Language concepts: "Abstract Classes".

Generically specified problems can be expressed in multiple languages, so they are exercise-related and also not track-specific.

The mathematical concepts are also not language specific, so they can be shared, unless some language "overloads" the concept with a more specific meaning. The only example I can think of now is the languages that use ternary logic - like R and SQL - which do not have booleans stricto sensu, but it wouldn't be so bad to put them together.

The language concepts are far from shareable. Let's discuss your example: "Abstract classes".

Abstract classes belong to the OOP world. While there may be parallels between "abstract classes" and Haskell's "typeclasses", they are completely distinct concepts, because classes/objects/methods and data-types/functions belong to different branches in the languages' tree.

Even if both concepts were the same, I would never use "abstract classes" in a Haskell context, because it's not part of the language's vocabulary.

I'm arguing here that language concepts are track-specific topics, by definition.

It is interesting to note that when I created the "TOPICS.txt" file, I intended it as a starting point. If the desired topic wasn't there, you could add your own.

Certainly we can make a list big enough to make all languages happy, in the sense that no desired string would be missing, but what would be the meaning of that? What would sharing an "if...then...else" among languages mean?

  • Is it control flow?
  • Is it an expression?
  • Is it a function?

The string would be the same, but the meaning would be different! IMHO, this is not sharing concepts, just strings.

I do agree that there should be some way for individual language tracks to define their own topics, as it is quite possible that a topic is unique to a language. Maybe each language track should be able to define their unique topics in the config.json file?

Agreed, but wouldn't it be simpler to just make all the topics track-specific? People would still be able to copy the desirable topics from other tracks...

I guess I'm not seeing the benefits of having global topics to justify the added complexity of having two sources of topics. Anyway, this is just an opinion and, considering that I'll not be the one maintaining that list, it shouldn't be taken too seriously. 😄

@ErikSchierboom
Member

IMHO, this is not sharing concepts, just strings.

I see your point. This is indeed one of the weaknesses of having a "master" list, although we now agree that there are concepts that are likely to be easily shareable.

I would never use "abstract classes" in a Haskell context, because it's not part of the language's vocabulary.

I don't suggest you do :) Haskell doesn't have abstract classes, so you wouldn't use that topic. I'm not suggesting the Haskell track should use the topic "abstract classes" because it might superficially resemble the concept of type classes. This is clearly a case where having a track-defined topic would make perfect sense. I'm just arguing that it should be the exception, not the rule.

Agreed, but wouldn't it be simpler to just make all the topics track-specific? People would still be able to copy the desirable topics from other tracks...

Well, yes and no. Yes, people can copy things and the tracks have complete liberty, but the main thing you could easily lose is consistency. What prevents a track maintainer from making a copy-paste error, or deciding that "regular-expressions" is prettier than "regular expressions"? This would introduce inconsistencies in the GUI, which is not a very desirable state.

All in all, I'm personally in favor of the hybrid approach, where there still is a master topic list but where tracks can add/override topics. This is similar to the approach that will be used for exercise READMEs, which have a base value but one that can be overridden.

If we would decide to make all topics track-specific, perhaps we could then have some tooling to check for consistency in naming.
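The hybrid approach could be sketched as a master mapping that tracks may extend or override. File names and fields here are hypothetical; the thread doesn't fix a format:

```python
# Hypothetical master list; in practice this might live in a shared
# cross-track file (e.g. a successor to TOPICS.txt).
GLOBAL_DISPLAY = {
    "booleans": "Booleans",
    "sorting": "Sorting",
}

def effective_display(track_overrides):
    """Master topic names, with track-specific additions and
    overrides layered on top (the hybrid approach)."""
    merged = dict(GLOBAL_DISPLAY)
    merged.update(track_overrides)
    return merged
```

A Haskell track could then add {"typeclasses": "Type Classes"} without the master list needing to know about it.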

@tleen
Member

tleen commented Jul 18, 2017

What I'm trying to make clear with those unfortunate examples is that each language has its own vocabulary! A set of topics common to multiple languages would feel alien to most of them, and that would just confuse the users.

This discussion is another facet of the main recurring issue of exercism tracks: we want to keep them the same, but they are each unique in their own ways. How do we manage it?

@iHiD
Member Author

iHiD commented Jul 18, 2017

Naively, if I consider Ruby, Python, C# and Javascript (4 languages I know reasonably well) and look at the current topics list, 90% of topics apply to all four. I've put a list here of the ones I think are mutually shared.

All of those for me share concepts not strings.

There are obviously examples where two similar sounding things are different. Prototyping is one that obviously springs to mind, both as a language concept and a development methodology. However, by just naming those more verbosely, I don't think we'll have an issue.

We 100% shouldn't shoehorn topics together because they sound similar, and as languages diverge more than the 4 in the example above, there may be other problems. But there seems to me to be lots of value in understanding when an exercise is about regular_expressions or strings (for example) and being able to compare those exercises across tracks.

@rbasso

rbasso commented Jul 18, 2017

@ErikSchierboom wrote:

What prevents a track maintainer from making a copy-paste error, or deciding that "regular-expressions" is prettier than "regular expressions"? This would introduce inconsistencies in the GUI, which is not a very desirable state.

The hybrid approach would only solve formatting inconsistencies for a subset (the common part) of the topics, and only by visual inspection. Maybe a schema would be a more appropriate solution for the proposed problem.

If we would decide to make all topics track-specific, perhaps we could then have some tooling to check for consistency in naming.

👍

@tleen wrote:

This discussion is another facet of the main recurring issue of exercism tracks: we want to keep them the same, but they are each unique in their own ways. How do we manage it?

You got the central question!

If I remember correctly, it was recently decided that we would move the README.md generation inside the tracks, making the exercises self-contained, which simplifies server logic and allows more customization. In a sense this was a trade-off between complexity and redundancy/customization.

Maybe it would be a good idea to push that idea a little further and try to keep the track self-contained. Having only track-specific topics could be interpreted as following the same principle.

There is no clear line to say when we should avoid factoring out regularities among languages, but a hybrid approach - one that would merge global and track-specific concepts - signals, IMHO, that we should avoid that.

@ErikSchierboom
Member

Maybe it would be a good idea to push that idea a little further and try to keep the track self-contained. Having only track-specific topics could be interpreted as following the same principle.

This is absolutely true. It then does hang on the quality of the tooling to catch things like typos etc.

@kytrinyx
Member

I think there are a few key questions here:

  1. To what degree should we be aiming for consistency across tracks?
  2. Where should the source of truth for concepts live?
  3. Where should the source of truth for the string representation of the topics live?
  4. Where should the source of truth for the user-facing representation of the topics live?

I think the first question raises a couple more questions:

  1. What purpose does consistency serve?
  2. To what degree are programming languages similar?

The longer I work with Exercism, the more I discover the answer to that last question: much less than I originally imagined. So no matter how much consistency we operate with where this is natural, I want to make sure that we are optimizing for the weird and wonderful quirks of each individual language. I don't know if we'll ever want Piet on Exercism, but it should be possible to make it so without jumping through painful hoops.

The other question, what purpose does consistency serve, is different. I think that where possible and where it makes sense, consistency can help in two ways. On the one hand, it can help reduce the maintenance burden, and on the other hand, it can help the end-users make connections and wrap their head around things.

But there's a trap here. If a concept is the same in different languages, it helps to use consistent language. If it's not the same concept, then consistent language is actively harmful.

So I think that we need to be very clear about what we want to be consistent about (if anything), and why (maintainers vs end-users). And ideally we'd get some optimum mix of all of those things.

Once we have clarified the degree and purpose of consistency, then it would make sense to consider how to encode them and where they should live, and who should be responsible for them.

@kytrinyx
Member

There's been no further discussion here since I attempted to refocus/reframe the discussion.

I'm going to make a few suggestions so that we can try to get this wrapped up and implemented into both the tracks and the nextercism prototype.

Philosophically I think that we should not try to enforce consistency between different language tracks, but where there is real overlap in concepts, I think that we should strive to be consistent. Most of all, we should be true to the language of the track. While the concept of Python's dictionary is similar to Go's map, which is similar to Bash's associative array, which is similar to Ruby's hash, it would be confusing to use the same term across all of these tracks.

In terms of implementation, I think that it would make sense to keep the topics array in the config.json, and to strive to make this as scannable and consistent as possible by normalizing to use only lowercase alpha(numeric) characters, with words separated by underscores or hyphens. I don't have a strong preference. I find that underscores are easier for people who use a mouse, because you can double-click them, whereas I find hyphens slightly more readable.

If there are no good reasons for one over the other I would suggest just flipping a coin and choosing one.

Normalizing the strings of the topics is an optimization for maintainers—it makes them easy to compare and it makes it harder to make weird variations of them (e.g. ControlFlow If/Else vs control-flow if|else vs control flow if/else vs Control Flow (If/Else) etc).

In terms of consistency, I think that it makes sense to use the TOPICS.txt file or some other similar file to list common terms. We can have some tooling to warn about topics that are similar to each other so that we can discuss if they are the same concept or not. Maintainers will (of course) not be constrained to use only topics that are listed there, but it can be a reference to help keep consistency where it makes sense.
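Such tooling could start from something as small as a pairwise similarity pass over each track's topics. The threshold and names here are guesses, not a spec:

```python
from difflib import SequenceMatcher
from itertools import combinations

def similar_topics(topics, threshold=0.8):
    """Flag pairs of topic names that are suspiciously close,
    e.g. "regular_expressions" vs "regular-expressions"."""
    flagged = []
    for a, b in combinations(sorted(topics), 2):
        if a != b and SequenceMatcher(None, a, b).ratio() >= threshold:
            flagged.append((a, b))
    return flagged
```

Flagged pairs would then be discussed by maintainers rather than auto-merged, since (as noted above) same-looking strings can name different concepts.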

We still need to consider how topics are displayed to people on the website.

Of the options that @iHiD listed in the original post, I strongly prefer (c):

c) In the UI database, managed by the UI team rather than the track maintainer. New topics that hadn't been seen before would be "titleized" (so "control_flow_if_else" would become "Control Flow If Else" in the UI) and flagged to be checked by the UI team, who could then enforce whatever rules they want.

This would mean that we could have a handful of people whose responsibility it is to check for new topics to name for the website, and they could be given an interface to do so, and we can optimize their workflow.

If you have strong feelings about this, or feel like I'm missing an important point, I'd like to hear it. If you think this is a good approach, but don't have any additional thoughts—would you mind giving this comment a 👍 ? That way we can see if there's more-or-less consensus.

Not that we need total 100% consensus. I don't know if that's possible. But I think that we can find something that is the best possible balance of concerns (and write tooling to solve some of the painful edges and corner cases).

@petertseng
Member

What purpose does consistency serve?

If a concept is the same in different languages, it helps to use consistent language. If it's not the same concept, then consistent language is actively harmful.

Maybe it helps to state this as "It is helpful to know that topic X from language A and topic Y from language B are or are not related".

One way to achieve that is to have X and Y be displayed as the same string if and only if they represent the same concept.

Another way may be to link between topics in some way ("X is also known as Y in this other language"? "X is not to be confused with Y in this other language that has the same name but means something different"??? I don't know).

For me, it would be helpful to know what's related and not, but not a dealbreaker if it's missing.

Most of all, we should be true to the language of the track. While the concept of Python's dictionary is similar to Go's map, which is similar to Bash's associative array, which is similar to Ruby's hash, it would be confusing to use the same term across all of these tracks.

We still need to consider how topics are displayed to people on the website.

I'd like to use the above example to ask a question. Let us suppose that there is some Go exercise that uses maps and it wishes to express that in its topics. And there also exists a Ruby exercise using hashes and it, too, wishes to express that.

  • Will these two tracks use the same entry in their topics arrays?
  • Will they be displayed in the same way on the website?

I don't think I am likely to object to any of the 4 combinations of answers, so this is merely out of curiosity.

@lizTheDeveloper

Hey everyone, I showed up to weigh in here on this problem that is actually really hard!
I've been writing curriculum for a number of schools throughout my lifetime as a teacher and engineer, namely Hackbright, Galvanize, Tradecraft, and Girl Develop It.

Turns out tagging exercises and other learning materials with a normalized mental model that works between languages is really difficult. I've actually been working on this with some education PhDs and it turns out that this is a formal problem in education called Topic Scoping and Sequencing.

Basically what you have to do to get this to work is to agree upon a shared set of concepts. You also have to determine what the specific level of knowledge is - using a function is different than writing a function, which is different than decomposing a function into several, and analyzing a function to determine whether or not it is a pure function is needed in JavaScript, but not in Haskell. Formally defining learning objectives in an abstract way is hard.

The reason this particular problem probably feels like a trap, and the reason that languages seem so dissimilar, is because this actually requires several experts sitting down and agreeing on similarities and then specifying differences. It's rough to pull off because getting experts to agree is hard.

That said, I have attempted to do so here: https://github.com/sagelabs/standards
We've recently migrated the specification here (linked to a specific example): https://raw.githubusercontent.com/enkidevs/curriculum/master/web/html/README.md

These are learning objectives- some of which revolve around the use of the structure of the language, some of which revolve around problem solving. You can see some of the similarities, and some of the differences between Python, JavaScript, and Java- but of course CSS shares next to nothing with the aforementioned.

This is because different languages are for different things. Tools are developed when existing tools are inadequate or insufficient for the job (or as art). There are many hammer-like tools, all of which do slightly different things: a sledgehammer could be used to hammer a nail into a wall, as could a claw hammer, but one will be more straightforward to use. Both require users to "use the head of the pounding device to insert a nail into a surface", but one also has the ability to "use the claw to remove a nail from a surface". There are similarities between tools, but each tool has its own learning objectives.

This leads me to believe there are a set of shared mental models, I would call these "meta-programming skills" but I'm sure there's a better name.

But why are they valuable? What's the point?

The issue is sequencing. Sequencing is difficult because what you want is to give learners components with which they can assemble solutions. You need them to have all of the existing components for a problem specification (e.g. if it's string and number parsing, they need to know about strings, numbers, and the control structures required to parse them). Presenting a learner with a problem for which they don't have all the components may set them up for failure. Some learners can overcome this and recognize the components they have, but many beginner learners don't have the awareness to recognize the components they understand.

Sequences aren't universal: there is no "best sequence" as far as I can tell. You don't build a "better" foundation by starting with presentation concerns, for loops, string parsing, or networking protocols. There's no specific place to start without an end goal in mind.

This is why sequences are based around the end goal: someone who wants to be a front-end developer should start with presentation concerns; someone who wants to develop new algorithms might start with control structures.

So my suggestion is this: let's collaborate on building a set of goal-based sequences of exercises, which differ depending on the start conditions.

This is also why you tend to have four levels of content in most LMS: Tracks, which are composed of multiple Topics, which are composed of Subtopics, which are composed of Units. I'm speaking abstractly about all LMS here: Topics would be "Java" or "CSS". Subtopics are arbitrary subdivisions in a Topic, and Units are "the smallest group of curricula", e.g. one "session" or one day's worth of content.

Tracks are a higher-level organizing concept, consisting of content from multiple Topics. This is what helps you organize and compose the learning content into a suggested sequence, and tie together related concepts. If what you want is for users to grasp commonalities between multiple Topics, a Track could help you make that happen. If what you want is to teach them how to write a simple webpage, a Track can sequence that appropriately too. If you want users to distinguish between the performance of different data structures, a Track can do this, and it doesn't really "disturb" topics, or require you to change the spec much at all. It also modularizes everything.

My suggestion is to determine a way to refer to specific curriculum elements and add them to a custom sequence.
Here's an example of that in Enki Content Format:
https://raw.githubusercontent.com/enkidevs/curriculum/master/javascript/core/README.md

Maybe this helps. 🤷‍♀️

@kotp
Member

kotp commented Jun 12, 2018

I am going to go out on a limb here and define "LMS" as Learning Management System. I had to reread the comment to make sure I did not miss the definition before the acronym was used... I am guessing that is what was meant by "... have four levels of content in most LMS:". Can you confirm, @lizTheDeveloper ?

@iHiD
Member Author

iHiD commented Jun 12, 2018

@lizTheDeveloper This is super interesting. I'm looking forward to getting a free hour and really considering all this. Thank you!

@kytrinyx
Member

@lizTheDeveloper This is fantastic. Thank you so much for laying out all of this in such a structured way. It's been a huge help in getting my head wrapped around the enormity of this and understanding that this is legitimately hard, not just incidentally hard.

let's collaborate on building a set of goal-based sequences of exercises, which differ depending on the start conditions.

I would be so excited to work on this. My first priority is to get v2 shipped and live so that we're not maintaining two codebases, and so that the site is usable, but once that is off my plate my next two concerns are (a) ensuring that we have the right resources and tools for the mentors to ensure that the experience is great on both sides of those conversations, and (b) improving the curriculum itself.

@kytrinyx
Member

kytrinyx commented Aug 3, 2018

We've imported this issue to the https://github.com/exercism/exercism.io repository.

@kytrinyx kytrinyx closed this as completed Aug 3, 2018