"Brains & Breaks: Summer Quiz"
Quiz • Other • University • Hard
Tumaini K
9 questions
1.
MULTIPLE CHOICE QUESTION
20 sec • 2 pts
And Let's Begin:
What comes next in the sequence? 3, 9, 27, 81, …
243
162
108
324
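Answer explanation
Each term is 3 times the previous one (a geometric sequence with common ratio 3), so the next term is 81 × 3 = 243. A minimal Python sketch of that check, with illustrative variable names not taken from the quiz:

```python
# Geometric sequence from the question: each term is 3x the previous one.
terms = [3, 9, 27, 81]

# Confirm the ratio between consecutive terms is constant and equal to 3.
ratios = {b // a for a, b in zip(terms, terms[1:])}
assert ratios == {3}

# The next term is the last listed term times the common ratio.
next_term = terms[-1] * 3
print(next_term)  # 243
```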
2.
MULTIPLE CHOICE QUESTION
20 sec • 2 pts
Which country is the most linguistically diverse in the world?
Bolivia
Papua New Guinea
Indonesia
Nigeria
Answer explanation
The Pacific Island nation of Papua New Guinea has three official languages – English, Tok Pisin, and Hiri Motu. In addition to these, Papua New Guinea is home to more than 800 unique languages, making it the most linguistically diverse country in the world.
3.
MULTIPLE CHOICE QUESTION
20 sec • 2 pts
Which 2023 film included a major AI ethics subplot, sparking global debates?
The Creator
Oppenheimer
Mission: Impossible – Dead Reckoning
Everything Everywhere All At Once
Answer explanation
The Creator (2023) is a sci-fi film centered entirely on AI ethics.
It’s about a future war between humans and AI, exploring:
- Whether AI can be sentient
- Moral rights of humanoid machines
- Fear of “the other” and techno-human coexistence
The movie explicitly mirrors real-world ethical questions about AI autonomy, empathy, and warfare.
4.
MULTIPLE CHOICE QUESTION
30 sec • 2 pts
Which of the following cities is located farthest from any ocean coast?
Ürümqi, China
Novosibirsk, Russia
Astana (Nur-Sultan), Kazakhstan
Lhasa, Tibet
Answer explanation
Ürümqi, in western China, is widely recognized as the large city farthest from any sea or ocean, lying about 2,500 km (1,550 miles) from the nearest coastline.
This location lies near what’s called the Eurasian Pole of Inaccessibility — the point on land most distant from any ocean.
Fun Fact: Ürümqi is often called the world’s most inland major city! 🌏
5.
MULTIPLE CHOICE QUESTION
30 sec • 2 pts
True or False: Explainable AI (XAI) always increases user trust in AI systems.
True
False
Answer explanation
False.
While Explainable AI (XAI) can increase user trust, it does not guarantee it. XAI can also decrease trust if the explanations are unclear, hard to understand, or unsatisfactory to the user.
6.
MULTIPLE CHOICE QUESTION
20 sec • 2 pts
Which of these is NOT a typical pillar of AI Trustworthiness?
Robustness
Interpretability
Scalability
Fairness
Answer explanation
Why Scalability is not considered a pillar of Trustworthiness:
While scalability (the ability of an AI system to handle large volumes of data or complex tasks) is important in practical applications, it is not a fundamental principle for establishing trust in the system. An AI system can be scalable without being robust, interpretable, or fair, and it is those qualities that trustworthiness rests on.
Trustworthiness focuses on the ethical and responsible development and use of AI, whereas scalability primarily concerns technical efficiency.
7.
MULTIPLE CHOICE QUESTION
30 sec • 2 pts
A highly transparent AI system still faces distrust from users due to past failures. Which cognitive bias is most likely at play?
Recency effect
Confirmation bias
Negativity bias
Illusory correlation
Answer explanation
While the other biases could play minor roles, Negativity bias is most likely at play here.
Negativity bias is the tendency for people to give more weight to negative experiences or information than positive ones. In this case, even though the AI system is now highly transparent (a positive trait), users are still influenced by past failures (negative experiences), leading to ongoing distrust.
8.
MULTIPLE CHOICE QUESTION
30 sec • 2 pts
Which of the following is a purpose‑built framework providing comprehensive ethical guidelines for the design and implementation of AI systems?
GDPR
NIST Risk Management Framework
EU AI Act
IEEE Ethically Aligned Design
ACM Code of Ethics
Answer explanation
IEEE Ethically Aligned Design - This framework is explicitly focused on ethical design principles for autonomous and intelligent systems, providing detailed guidance on embedding ethical considerations throughout AI development.
9.
MULTIPLE CHOICE QUESTION
30 sec • 4 pts
The "Chinese Room" thought experiment was proposed to challenge:
Machine translation accuracy
The Turing Test
Strong AI and machine understanding
Computer processing speed
Answer explanation
The Chinese Room thought experiment, proposed by philosopher John Searle in 1980, was intended to challenge the idea that a computer program can achieve genuine understanding or consciousness simply by manipulating symbols according to formal rules. Specifically, it argues against the notion that syntax (the formal manipulation of symbols) is sufficient for semantics (meaning or understanding).