Self Critique

What did we do well?

We refactored our website heavily to avoid rewriting code this time around. Previously, a lot of our code consisted of near-duplicate copies with slightly different twists to get each page working. This go-around we abstracted away a lot of our models, templates, and functions to take advantage of code reuse. For example, we created a pagination.html file that contains the pagination logic, so adding pagination to a page only requires {% include "pagination.html" %}. Additionally, we simplified our database-access code and abstracted it so that all models share the same code and just change the name of the table they query.
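As a rough sketch of what that shared database-access code can look like (the table names, columns, and helper name here are illustrative assumptions, not our actual schema), a single helper can serve every model by validating the table name and building the query once:

```python
import sqlite3

# Hypothetical sketch of the shared database-access helper described above.
# The "workouts" table and its columns are assumptions, not the real schema.

# Table names can't be passed as SQL parameters, so validate them against a
# whitelist before interpolating into the query string.
ALLOWED_TABLES = {"workouts", "exercises"}

def fetch_all(conn, table, order_by=None):
    """Return every row from `table`, optionally sorted by one column."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table}")
    query = f"SELECT * FROM {table}"
    if order_by is not None:
        query += f" ORDER BY {order_by}"
    return conn.execute(query).fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workouts (name TEXT)")
conn.executemany("INSERT INTO workouts VALUES (?)", [("squat",), ("bench",)])
print(fetch_all(conn, "workouts", order_by="name"))  # → [('bench',), ('squat',)]
```

With something like this, adding a feature such as sorting means changing one function instead of every model.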

We feel that we have continued our strong use of GitHub's features to keep track of things. Organizing issues by milestones, projects, and tags has made it clear what we need to do for each stage and the relative importance of each issue. With the tags we can easily see whether something is a large- or small-scale problem and what kind of skill set is needed to tackle it. We are well over a hundred issues now, and by the end of the project we will have dozens more. The template we added in the previous phase has continued to save time, letting us add features to the site without having to mess with the CSS much, so we can focus on the logic of the site.

What did we learn?

We learned the importance of future-proofing our code. A lot of this project has been refactoring the code when the requirements change, which slows us down considerably. For example, if we had abstracted everything away earlier, we wouldn't have had to rewrite things in each model every time we added a feature such as sorting. That gets especially tedious when a typo in one model has been copied and pasted down the line and every single model then needs fixing.

In the same vein, we learned how important abstraction and modularity are in programming. Being able to reuse code sped up production a lot and simply made things easier. We were able to leverage that abstraction in disparate parts of the project by calling methods in our own "library files" that we had set up for things like database access. We also worked on our automation skills to continue scraping information from our APIs into our database. We nearly quintupled our workout database from the last phase, and just the thought of doing that by hand makes our fingers hurt.
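A minimal sketch of that kind of scraping loop, with the HTTP call stubbed out (fetch_page, the workouts table, and the sample data are all hypothetical stand-ins for the real API), looks something like this:

```python
import sqlite3

# Hypothetical sketch of the scraping automation described above. fetch_page
# is a stub; the real version would make an HTTP request to the API's
# paginated endpoint.

def fetch_page(page):
    """Stub standing in for an HTTP call to a paginated API endpoint."""
    sample = {1: [{"name": "squat"}, {"name": "bench"}],
              2: [{"name": "deadlift"}]}
    return sample.get(page, [])  # an empty list signals the last page

def scrape_all(conn):
    """Walk the API page by page and insert every workout into the database."""
    page = 1
    while True:
        rows = fetch_page(page)
        if not rows:  # reached the end of the paginated results
            break
        conn.executemany("INSERT INTO workouts (name) VALUES (?)",
                         [(row["name"],) for row in rows])
        page += 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workouts (name TEXT)")
scrape_all(conn)
print(conn.execute("SELECT COUNT(*) FROM workouts").fetchone()[0])  # → 3
```

Once the loop works for one page, growing the database five-fold is just a matter of letting it run.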

What can we do better?

The biggest thing that bit us this phase was having to catch up from the previous one. Phase two did not go very well at all, and a lot of this phase was spent finishing phase two work so that we could actually do phase three. If we had finished all of our testing earlier, we wouldn't have had to worry about an increased workload that was only worth smaller and smaller quantities of points.

At several points during this project, team members asked "what should I be doing today?" because there was no clear roadmap of "this must be done before that" and "this is a higher priority than that." We often ended up with two people working on the same thing, and one person's work had to be scrapped, wasting time. We should have made a flowchart or something similar ahead of time so anyone could see "oh, I need to work on this next," and we need some method of saying "hey, I'm working on this right now, don't touch it" that's easier to see than what we currently have. We considered GitHub's assign-an-issue feature, but a problem we've had with it in the past is that someone can pick up a mission-critical issue, do some work on it, then have to drop it for three or four days to study for a test. It's not clear then whether someone really is working on it, or means to work on it but can't get around to it.

What puzzles us?

Similar to the last phase, we're curious how other teams are run and how they're attacking their problems. The whole learning-by-doing approach is great, but something like a weekly Q&A from one of the groups during the first 10 minutes or so of class would be super interesting, so that we could share knowledge. Some groups might have already solved our problems, and we might have already solved theirs. We're also curious about when the other groups finish the different phases of their projects and how we compare. It would be very helpful for determining how on track we are if we could compare against other projects without having to read their entire codebase to understand how far along they are in the requirements.
