Judging Feedback

  • #1

    I realize that I am beating a dead horse, but the subject of judging feedback still seems to get raised again and again without any satisfactory resolution. I understand that the answer is not simple, but there must be a better response than "go do some self-reflection".

    Our team just completed the local regional championship. They enjoyed the event (and appreciated all the work of the volunteers and organizers that made it possible) but ended up going home frustrated and disappointed. (Yes, I know teams go home disappointed; it is the frustration that I want to explain.)

    The team had done well and advanced to a super-regional last year but did not make it to Worlds. This year they agreed together that they would work harder, increased their outreach activities, and really improved some sections of the engineering notebook.

    They went to over 20 community events where they promoted FIRST, its programs, and its ideals to kids and their parents. Many of these were middle-school-aged children, so the team explained the FLL program and provided contact information for the state associate partner for FLL. They volunteered at a couple of local qualifiers, since we are too small to host one of our own, and they worked on community outreach where they could make an impact using skills they had learnt from the robotics program.

    On competition day, the judging interview seemed to go well, and the notebook and Control Award papers were submitted (since the Control Award section was one of the areas they had worked hard on). At the awards ceremony the team was very happy to receive the Judges' Award, but was surprised and confused not to get a single mention, in any position, in any other award category, despite the extra work they had put in. Even other teams seemed surprised that we had not been recognized elsewhere.

    The team's frustration comes from having to decide how to approach next year's season. They have no idea if they were focusing on the wrong areas, missed something important, or just need to do more next year.

    Some level of feedback from the judging would have helped the team understand what they need to improve, and would have allowed them to focus and plan; instead, the lack of feedback just leaves a vacuum of confusion. Understandably, they don't want to work even harder next year only to get the same result because they missed something.

    FIRST emphasises the design cycle, which includes feedback as a key component. When you see the robot fail to do what you expect, you may have to reflect to determine the root cause, but at least you have the visual feedback to point you in the right direction.

    If the notebook doesn't show how you used feedback in the design process, you won't do well for the Think Award.

    FIRST embodies great values, but the current implementation of the judging system does the exact opposite of what FIRST tells the kids to do by refusing to provide any feedback.


    As I have said previously, I understand that there are issues with giving feedback, but I still believe that we miss more opportunities by not giving any feedback.

  • #2

    First off, congratulations to your team. It sounds like they are an amazing group of kids!

    I'm usually a mentor, but I've been on the judging side of things as well, at all levels of FIRST. Of all the FIRST programs, FTC is the only one that prohibits feedback entirely, so I feel your pain. Add to that the subjectiveness of judging (and the fact that many times the judges are not well trained) and you end up with a team that one week doesn't get mentioned for awards and the next week, with a different judging panel, wins the Inspire Award.

    I always remind the kids that we aren't doing outreach to win awards. Reaching out to people is its own reward. Sometimes you end up getting recognized for that, and sometimes you don't. The final outcome of the awards is largely out of your control because you don't know if the judges are following the rubric or not, and you don't know how the judges think. Some are more analytical while others might be more story-driven. If the team can recognize which one it is, they may be able to adapt. After a few judging interviews (at any level of FIRST), kids start to figure out how to talk to people and convey what is important to the person asking the questions. They can also tell if a judge does or does not understand FIRST, and they can usually tell if they themselves gave a good or bad interview. I've had teams come out of the room and say, "Welp, we better do well with the robot today because we're not qualifying on judged awards!"

    The lack of transparency and feedback in judging isn't ideal and is definitely frustrating, but there are a few things you can do to help your team:

    1) Talk to the teams who won the awards you think you were eligible for to see what they did and how they presented their message. There isn't any magic to this, but you can always learn something (and they might learn something from you as well). There are very few top-level teams that aren't willing to share.
    2) Volunteer to be a judge. Nothing will help you understand the judging process, and how to help your team, more than getting involved in it yourself.
    3) Look at the award descriptions carefully. I know this is that "self-evaluation", but is your team conveying what they did so that the judges see they are meeting all the criteria for the award? Discuss this with another team that won an award and get their perspective. Sometimes, if two teams are close for an award but one of them is perceived to have missed an important point, that makes the difference.

    Sometimes the judges will write a really good awards script that can help you understand why a certain team was picked for an award, and you end up saying "oh THAT's an awesome thing they did!" Other times not so much.

    Having been on both sides, I can see how it would be difficult to write feedback for every team, but it would be nice to have some mechanism for those who want it. The FRC Chairman's Award interview has a feedback form that you can leave with the judges for this purpose, and they are supposed to fill it out if asked. I almost feel like the FTC awards are more like FLL in the sense that a rubric with a rating system for the different categories would be better.

    At the Tulsa off-season event a couple of weeks ago, each team was asked to give their judging presentation in front of everyone else at the event (I think there were a dozen teams) so they could learn from each other. Something like that might be a good start for gaining feedback on your presentation. Again, I know that's not exactly what you are looking for, but I don't anticipate that the practice will change anytime soon.

    Anyway, I wish you the best, and congratulations again to you and your team. I hope they can work through the feelings of frustration. Keep doing good things and your day will come. I've seen that happen over and over again.
    FTC 4962 / 3638
    FLL 11 / 21 / 9293

    • #3
      One suggestion is that state affiliate partners ask FTC award-winning teams to publish an executive summary of their award section, if not their entire engineering notebook. This would take the mystery out of the award winners and not rely on the one- or two-sentence phrasing of the award description.

      See also the enhanced judging rubric developed for the FTC San Diego Region: "The goal of this rubric is to provide additional guidance to the teams and ensure a consistent judging process."

      Here is a great description of the value of enhanced rubrics, and how they help the learning process, from the UC Berkeley Division of Undergraduate Education Center for Teaching & Learning.

      • #4
        BSV, I really appreciate the time and detail you put into your reply, and thank you both (BSV and FTC13259) for your suggestions. I think they can provide valuable improvements to the process:

        - The judging rubric would be a great way to help the judges and the teams share a common set of expectations and standards.
        - Having "best-in-class" examples of the notebook sections seems like an excellent way to illustrate to teams what is possible, and inspire them to reach higher.
        - We don't have an off-season event, and one elsewhere may be too far for some teams, so I wonder if we could invite "best-in-class" judging-session teams to share videos of their presentations.

        I plan to propose all three for next year as ways to enhance the judging process for our region.

        I agree that a well-written award script can recognize the aspects at which a team excelled, but that doesn't always help the team that believes it, too, had a good case but did not get recognized.

        I have heard people justifying this process by saying that it is like a job interview, but I disagree. I think it would be better to see it as an annual performance review. After all, each team is invested in the organization in terms of time, talent and money, as well as paying their dues (literally) and so may be better thought of as an employee or team member rather than a candidate walking in off the street.

        In performance reviews we use a leveling guide (similar in concept to the enhanced judging rubric) to recognize the best, but also to build up the rest of the organization. This requires feedback to the employee to recognize what is being done well, and also to identify the areas in need of improvement. Giving feedback is not always easy but it builds better employees and a better organization.

        I would love to see a quick and simple feedback form, with a total of maybe a dozen items covering the different aspects of the judging interview, notebook, pits, and pit interview, with a 1-to-5 scale for each. A small text area for optional, more detailed comments would be useful. This does not eliminate the self-reflection, which I think is useful, but it allows teams to reflect on the areas where the judges' assessment differed (positively or negatively) from their own, and how that might have been influenced by the judging interview and the notebook presentation.

        Section 10.3.1.1 of Game Manual Part I explains that this policy is meant to encourage self-evaluation. I agree that this is an important skill, but I still believe that a total lack of feedback makes this more blind guesswork than a useful skill-building exercise. I was not aware, until BSV pointed it out, that this is the only FIRST program to adopt this rule.

        In the feedback that FTC solicited last year I voiced my frustration with the no-feedback rule, as did some other teams that I talked to, but that feedback disappeared into a black hole. I started this thread in the forums to try to start a discussion on the subject, in the hope that we can do what we tell the kids they should do: model working together and come up with a better solution.

        • #5
          I think that BSV has some great recommendations.

          Keep hammering, on the feedback forms that FIRST sends out, that this is a big issue and needs to be addressed. Tell your local partner to push FIRST to allow feedback for FTC too. Some partners are afraid of dealing with irate teams/coaches/parents if they get feedback that they don't agree with; I'd guess those partners have had a bad experience in that realm. However, from what I've seen, the partners will get some upset people bugging them no matter what. At least by having feedback from the judges there is something the team can use to improve their skills.

          As for the argument that you don't get feedback from a job interview - that's true, but only sometimes. I have often had a recruiter get me information about what did or didn't go well in an interview. And if it wasn't the recruiter, it was a friend who worked at the company who got me some feedback.

          Having judged FLL too (and written feedback) and 4-H fairs (also with written feedback), I know it is sometimes hard to convey what you want to tell a team. There are time constraints such that you can't write as much as you'd like to, or it's something subtle and hard to convey. So getting written feedback from the judging process won't solve all your issues - but it will make the process A LOT better. There are times I wish I could tell a team: go fix this one thing.

          Until FIRST changes this (backward) policy, here are some suggestions (much like BSV's):

          1) Have adults volunteer to be judges at other tournaments. It is an eye-opening experience. You should not discuss specific things about what goes on in the judging room, but the experience will give you general ideas of things that your team needs to do. It also gives you personal connections to other teams' coaches and mentors. Collect those email addresses so you can set up other things with them later.

          2) Your presentation is not your final sales pitch. It is a short summary of what you've done that entices the judges to come take a second look at your team in a callback. Just like a resume doesn't get you the job - it gets you the interview - the presentation doesn't get you the award; it gets you the callback.

          3) Make things easy to find in your notebook. Judges don't have more than a few minutes to look at it. If you are shooting for Connect or Motivate, have a section labeled Outreach, with sub-sections for outreach to engineering professionals and for outreach to other FIRST teams. At the beginning of that section, have a summary page that lists all your events and how many hours your team spent working on them.

          4) Get others to review your notebook. We had a technical writer who gave us a lot of good feedback, and also a marketing professional. Find the Think Award winner for your region (and the finalists) and ask them to review your notebook.

          5) Get feedback on your presentation. It could come from presenting to a sponsor and asking them at the end what they think you could do better. It could come from inviting other teams to watch your presentation and give you feedback. Or set up a practice time where you arrange for a few judges and then invite multiple teams - it gives you and the other teams feedback AND is an outreach event all in one. (See above about making connections to those other coaches/mentors by volunteering to judge.)

          If travel for such an event is a problem, you can do this through Skype too. It's not as nice as face-to-face, but it can still help a lot. Even for the notebook: make it into a PDF (scan it if necessary), put it on Google Drive, and share it with someone. A physical notebook is better because it has things like the cover and tabs, but the electronic version will do.

          My team attended an event set up by another team where four other world-class teams joined via Skype and each presented for, I think, about 30 minutes on some topic. These top-performing teams are usually more than happy to help other teams.

          For notebooks, having some good examples is great. However, don't feel that you have to be exactly like another team just because they won Think at an event. I've seen lots of great notebooks that are very different from one another. If teams are willing to share with you, that is great - use that to get ideas and then build your own notebook.

          All that being said, judging is still sometimes a subjective process. One team member can make a joke that a particular judge finds cutting rather than humorous. You probably won't see that in the feedback, but it can shoot down your chance for that award. On the other hand, cut out all humor and the judge may find your team "flat". You don't know which way it will go, so you just do the best you can, and when things don't go your way, take it as a learning experience that sometimes you don't win. However, with feedback, I think it would be a better experience for everyone.

          • #6
            Originally posted by FTC13259
            See also The Enhanced judging rubric developed for the FTC San Diego Region "The goal of this rubric is to provide additional guidance to the teams and ensure a consistent judging process."
            FTC13259, this is quite good. Why aren't we using that everywhere?

            I have seen an engineering notebook rubric from another region as well, but it isn't given to teams as feedback (although it was once or twice in the past and people appreciated it very much).
            FTC 4962 / 3638
            FLL 11 / 21 / 9293

            • #7
              Nicks, if you want some advice on how you can improve, send us an email at [email protected]. We try to help as many teams as we can improve, and we can point you in the right direction to win some awards. We won the Inspire Award at the World Championship last year, and now our goal is to get other teams to that level, so we'd love to help you.

              • #8
                RollerCoaster45, thanks - we always appreciate any help and guidance we can get. Cooperation between teams is one of the best values in FTC and helps turn it from just being a competition into a community where everyone can benefit and grow.

                I will send you a link to the electronic copy of our notebook and look forward to your suggestions and feedback.

                • #9
                  DanOelkeFTA, thanks - since FIRST is (ironically) sending out feedback requests, hopefully people can bring up the lack of any judging feedback as an issue that needs to be readdressed.

                  I appreciate all of the suggestions, ideas, and materials that were shared here. I will propose that some changes, especially the enhanced rubric, be adopted here in our region. Our team can work on some of the other suggestions next season, like participating in the judging process.

                  All of these ideas and suggestions are valuable improvements, but they still don't address my core frustration: the lack of any judging feedback, which I still believe is the antithesis of everything else that FIRST stands for. I was interested that no one chimed in to defend this process, other than possibly to avoid confrontations with irate parents/coaches (which already seems to be covered under GP conduct anyway).

                  • #10
                    people can bring up the lack of any judging feedback as [survey] issue
                    Did that on the survey.

                    • #11
                      This blog post from JoAnn says that FTC is listening to these concerns and asking for feedback. Very happy to see this, and I hope we can get some good proposals out of it!

                      http://firsttechchallenge.blogspot.c...ck-update.html

                      FTC 4962 / 3638
                      FLL 11 / 21 / 9293

                      • #12
                        The enhanced rubrics seem well thought out, but it wouldn't be fair to have judging rubrics that aren't shared with the teams in advance. Also, this particular set effectively modifies the official award descriptions in some ways. Of course, having unknown, unofficial rubrics in each judge's head is even worse (and that is the current situation).

                        I would advocate adding more detail to the award descriptions, or having them reference some kind of document like this.

                        What should be avoided, however, is some kind of points system that removes subjectivity. The rubrics seem mostly helpful for closely judged situations.
