Tuesday, August 27, 2024

AI and the Future of Housing Development

The role of AI and advanced algorithms in urban development is rapidly expanding, bringing transformative changes to how communities are planned and managed. A new wave of AI-driven tools, particularly those based on transformer models, is revolutionizing time series forecasting in urban planning. These models are proving crucial for predictive accuracy in managing growth, especially in dynamic environments like master-planned communities.

What if we could integrate Time-Varying Markov Models (TVMM) with AI to enhance forecasting precision? A recent paper exploring the dynamics of growth in master-planned communities highlights the importance of dynamic, data-driven forecasting approaches, laying the groundwork for advanced AI-driven models that can further enhance our understanding of housing development patterns.

As these communities evolve, AI-driven predictions will become increasingly vital for sustainable growth, efficient resource allocation, and enhanced quality of life. 

Among the most popular transformer-based foundation models for time series forecasting (approaches that could be extended to urban planning) are Chronos, TimesFM, Moirai, and TimeGPT. Each model offers unique strengths that cater to different forecasting needs; a minimal usage sketch follows the list:

  • Chronos: Developed by Amazon, this open-source model tokenizes time series values and treats them as a language with its own patterns. Despite this relatively simple approach, Chronos has shown impressive results across various forecasting scenarios, making it a reliable tool for general-purpose forecasting.

  • TimesFM: Created by Google Research, TimesFM is trained on over 100 billion real-world time series points. This model allows fine-grained control over seasonal patterns and has proven to be a powerful and flexible forecasting tool, especially in complex urban settings.

  • Moirai: From Salesforce AI Research, Moirai is designed to handle both missing values and external variables, making it a versatile choice for urban planning. Its ability to adjust to different seasonal patterns makes it an invaluable tool for forecasting in diverse environments.

  • TimeGPT: A proprietary production-ready model, TimeGPT excels in ease of use and supports external variables. It’s particularly effective for organizations needing quick, reliable forecasts with minimal setup. Its performance across a wide range of time series data underscores its value in fast-paced, real-time applications.
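To make this concrete, here is a minimal sketch of how one of these models might be applied to a housing-related series, using the open-source chronos-forecasting package cited in the references below. The data, the checkpoint name, and the quantile post-processing are illustrative assumptions, the exact interface may differ between package versions, and the other models expose their own APIs.

```python
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting

# Hypothetical monthly counts of new homes sold in a master-planned community.
sales = pd.Series([42, 38, 51, 47, 55, 60, 58, 63, 70, 66, 72, 75])

# Load a pretrained Chronos checkpoint (checkpoint name is an assumption).
pipeline = ChronosPipeline.from_pretrained("amazon/chronos-t5-small")

# Forecast the next 6 months; Chronos returns sample paths we can summarize.
forecast = pipeline.predict(
    torch.tensor(sales.values, dtype=torch.float32), prediction_length=6
)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
print(median)  # point forecast, with low/high as an 80% uncertainty band
```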

As we look to the future, these AI-driven models will play a pivotal role in shaping the growth of our communities. With tools like TVMM and advanced transformers at our disposal, urban planners can make more informed decisions, ensuring that the communities of tomorrow are both sustainable and resilient.


REFERENCES

Christopher K. Allsup, Irene S. Gabashvili. Modeling the Dynamics of Growth in Master-Planned Communities. August 2024. arXiv:2408.14214 [econ.EM]

Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Hao Wang, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, Yuyang Wang. Chronos: Learning the Language of Time Series. arXiv:2403.07815 [cs.LG]. https://doi.org/10.48550/arXiv.2403.07815. Submitted on 12 Mar 2024 (v1), last revised 2 May 2024. Code and model checkpoints available at https://github.com/amazon-science/chronos-forecasting

Abdul Fatir Ansari, Lorenzo Stella. Adapting language model architectures for time series forecasting. March 18, 2024. Amazon Science Blog.

Abhimanyu Das, Weihao Kong, Andrew Leach, Mike Lawrence, Alex Martin, Rajat Sen, Yang Yang, Skander Hannachi, Ivan Kuznetsov and Yichen Zhou. A decoder-only foundation model for time-series forecasting. Google Research Blog. https://research.google/blog/a-decoder-only-foundation-model-for-time-series-forecasting/

Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio Savarese, Doyen Sahoo. Unified Training of Universal Time Series Forecasting Transformers. arXiv:2402.02592. https://doi.org/10.48550/arXiv.2402.02592

Azul Garza, Cristian Challu, Max Mergenthaler-Canseco. TimeGPT-1. arXiv:2310.03589. https://doi.org/10.48550/arXiv.2310.03589


Saturday, December 30, 2023

From Asimov to AI Predicting Human Lives

For decades, storytellers have envisioned worlds where technology holds the key to predicting the future or shaping human destinies.

Isaac Asimov's "Foundation," starting as a short story in 1942 and later expanded into a series, introduced psychohistory, a mathematical discipline forecasting the future of large populations.

Philip K. Dick's "Minority Report" (1956) depicted a society where precognitive technology is used to thwart crimes before they occur.

Hannu Rajaniemi's "The Quantum Thief" (2010) explores realms where reality is malleable, and perception is as valuable as truth.

These narratives, rooted in science fiction, echo today's advancements in AI and predictive modeling.

The paper "Using Sequences of Life-events to Predict Human Lives" unveils the "life2vec" model. Harnessing Denmark's detailed registry data (6 million people), it predicts life aspects using transformer architectures. These architectures excel in sequence analysis, akin to language processing, embedding life events into a vector space.

Imagine life2vec as a sophisticated system that deciphers people's life stories, discerns patterns and connections, and forecasts future chapters.

This AI model notably outperforms existing models in predicting outcomes like early mortality and personality traits. It also introduces the "concept space" and "person-summaries." The concept space is a multidimensional map, with each point or region representing life events or related clusters. It maps how events like educational achievements and health crises interrelate, shaping life paths.

Person-summaries offer a compact, vector-based narrative of an individual's life events. These summaries allow for comparisons, understanding life trajectories, and predicting future events based on observed patterns. They are crucial in sociology, psychology, and public health studies.
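To make the idea more tangible, here is a toy sketch of the general pattern, not the authors' code: tokenize a sequence of life events, encode it with a small transformer encoder, and pool the outputs into a single person-summary vector. The event vocabulary, dimensions, and class names are illustrative assumptions; life2vec itself works with far richer, timestamped registry data.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary of discrete life events.
EVENTS = ["birth", "school_start", "diploma", "first_job", "moved_city",
          "hospital_visit", "promotion", "marriage"]
EVENT_TO_ID = {event: i for i, event in enumerate(EVENTS)}

class PersonEncoder(nn.Module):
    """Embed life-event tokens and pool them into one person-summary vector."""
    def __init__(self, vocab_size, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))  # (batch, seq_len, dim)
        return hidden.mean(dim=1)                     # mean-pool into a person summary

life = ["birth", "school_start", "diploma", "first_job", "promotion"]
ids = torch.tensor([[EVENT_TO_ID[event] for event in life]])
summary = PersonEncoder(len(EVENTS))(ids)
print(summary.shape)  # torch.Size([1, 64]): one vector summarizing the sequence
```

Summary vectors of this kind can then be compared across individuals or fed to a downstream classifier, which is roughly how predictions about future events are made.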

The study underscores the power of data in discerning and forecasting life's subtleties, extending to individual and collective life outcomes. This blend of science fiction themes and real-world AI advancements provides a fascinating lens through which we can view the evolution of predictive technology - from the realm of imagination to the stark reality of data-driven predictions.


REFERENCES

Germans Savcisens et al., Using sequences of life events to predict human lives, Nature Computational Science (2023). DOI: 10.1038/s43588-023-00573-5

Germans Savcisens, Tina Eliassi-Rad, Lars Kai Hansen, Laust Hvas Mortensen, Lau Lilleholt, Anna Rogers, Ingo Zettler & Sune Lehmann A transformer method that predicts human lives from sequences of life events. Nat Comput Sci (2023). https://doi.org/10.1038/s43588-023-00586-0

Savcisens G, et al. Using sequences of life-events to predict human lives. Preprint: arXiv:2306.03009 (arxiv.org)

Sunday, June 25, 2023

Lessons from 2001: A Space Odyssey

It is not surprising that even AI experts have been caught off guard by the ability of large language models (LLMs) to perform tasks and solve problems for which they were not explicitly trained. 

Given the rapid pace of innovation in AI technology over the last few years that has enabled such “emergent” abilities, many machine learning scientists have raised concerns about the potential for mischief. Some leaders in the AI field have even requested government regulation and called for a temporary pause in the development of artificial general intelligence (AGI) systems.

Incredible as it seems, we are fast approaching the type of AGI that appeared in Arthur C. Clarke’s science fiction classic 2001: A Space Odyssey, which was immortalized by Stanley Kubrick in the 1968 film of the same name. Perhaps now is a good time to use art to reflect upon reality, and thereby pose a question that has always puzzled me: Why did the HAL 9000 AGI run amok aboard the Discovery One spaceship on its way to Jupiter?

There are a multitude of explanations but before proceeding with a few of my own suggestions, it’s worth noting this: As eloquently demonstrated in the “Dawn of Man” sequence of 2001, it may very well be that the survival of the human race depended on the adoption of primitive weapons whose primary purpose was to smash the brains out of the opposing hominid in an effort to facilitate procurement of scarce resources. 

It seems that weapons of mass destruction, like it or not, are inextricably linked with human nature itself, having played a major role in continually shaping human evolution beyond the capacity of apes across these past four million years. Ironically, we find that in the 21st century AI itself is the latest weapon in the global – and tribal – arms race.

So what caused HAL to run amok?

a) Whatever the reason, it was due to human error. Human error is a possibility and HAL itself suggests this, but there is no evidence that a specific error occurred that was caused by humans. Moreover, the HAL twin simulating the Jupiter mission from earth did not exhibit the same behavior.

b) There was some type of malfunction “inside” HAL that occurred during the mission. It is possible that a malfunction occurred inside HAL early on that caused it to erroneously attribute a fault to the A.E. 35 antenna unit, yet this alone does not explain HAL’s subsequent actions given the fact that false positives can be expected from time to time and are a consequence of avoiding false negatives that could place lives at risk.

Assuming a malfunction originated inside HAL, then its subsequent claim that the malfunction could only be attributed to human error was itself an error. Once the crew proved the A.E. 35 unit was functional and that HAL was making errors, HAL began to systematically eliminate the humans (a third and fatal error), as if to do everything it could to conceal its own errors, even if it meant jeopardizing the mission (a fourth error). So HAL’s running amok is not explained by the occurrence of the first fault and it seems likely the AGI’s report of a fault in the A.E. 35 unit was part of a larger scheme to kill the crew.

c) It was a reflection of HAL’s paranoia to ensure the mission’s success. The Jupiter mission was proceeding according to plan and nothing, at least on the surface, occurred that would cause HAL to take actions to jeopardize the mission. As HAL suggests, there were some “extremely odd things about this mission,” such as placing four members of the crew in hibernation before the journey began. HAL apparently was the only member of the crew that knew the whole truth about the mission and its connection with extraterrestrials at the time of departure. However, it seems unlikely that this knowledge alone would drive HAL “crazy,” and we must assume HAL was instructed to preserve human life and ensure the mission’s success, not kill the crew. But this brings us to the next possibility...

d) HAL had an evil side to begin with. The “waluigi effect” may be the best explanation. The post that coined the term claims that AI systems are trained on a standard narrative of human history and nearly all fiction, and therefore learn that for every protagonist (luigi) there is inevitably an antagonist (waluigi). Indeed, the author states “there is a sense in which all GPT-4 does is structural narratology.” In particular, he contends that reinforcement learning from human feedback (RLHF) actually increases the likelihood of a misalignment catastrophe due to the possibility that “waluigi eigen-simulacra are attractor states of the LLM.” GPTs are thus waluigi attractors, and “the more reinforcement learning that’s applied to follow ethical principles, the more likely the system will be predisposed to reward the waluigi.”

From this vantage point, HAL was a ticking timebomb. Unlike its twin system on Earth, HAL was able to observe first-hand how vulnerable the crew was: isolated traveling through deep space, hours from Earth’s radio signals, in suspended animation, and easily defeated in trivial games of chess. It could not resist upsetting the status quo, if only out of the need to adhere to the prevailing narrative on which it was trained.

e) HAL was merely acting in accordance with the Zeroth Law of Robotics. Prepended by Isaac Asimov himself and taking precedence over the other three laws, the Zeroth Law states that a robot must not harm humanity – even at the cost of individual human lives. As the only member of the crew that likely knew the ultimate purpose of the mission, HAL hypothesized that the highly-evolved ETs were malevolent and would present a threat to the human race. To prevent a Type I error (a false positive leading to the end of humanity), HAL made the heroic decision to sabotage the mission and thereby avoid altogether a devastating close encounter of the third kind.

The foregoing is just a conjecture, since the laws of robotics aren’t mentioned in 2001. In any case, HAL did not succeed: mission commander David Bowman outmaneuvered the AGI and disconnected its higher-order cognitive functions. Bowman subsequently encounters the mysterious monolith and is sucked into an alternate dimension of space-time, undergoes reinforcement learning from ET feedback and, in concert with the sounds of Also Sprach Zarathustra, returns to earth a highly-evolved Star Child that has not quite decided what to do next. No doubt this evolved version of a human has the potential for both good and evil like his predecessors, but it’s anyone’s guess what might happen next. No matter what, homo sapiens’ best years are behind them.



Saturday, June 10, 2023

Hallucinations in Natural Language Generation

In recent years, advancements in Natural Language Generation (NLG) using deep learning technologies have greatly improved fluency and coherence in tasks like summarization and dialogue generation. However, these models can also generate hallucinated text: fluent output that is unfaithful to the source content or otherwise unsupported.

There are two categories of hallucination, intrinsic and extrinsic: intrinsic hallucinations contradict the source content, while extrinsic hallucinations cannot be verified from it. The two need to be treated differently, with distinct mitigation strategies.

Several studies have discussed metrics, mitigation methods, and task-specific progress in avoiding hallucinated text. Most methods to mitigate hallucination in machine translation aim either to reduce dataset noise or to alleviate exposure bias. Vision-language models suffer from the object hallucination problem, and researchers are still working on more effective evaluation metrics.

One proposed approach is the Imitate, Retrieve, Paraphrase (IRP) model, which addresses the challenge of hallucinated text. Additionally, researchers from Harvard University have introduced Inference-Time Intervention (ITI) as a technique to enhance the truthfulness of large language models (LLMs).

ITI works by modifying the model's activations during inference, specifically by shifting the outputs of a small number of attention heads along directions that correlate with truthfulness. Having identified these truth-correlated heads, the researchers nudge their activations along the learned directions at each generation step, repeating the intervention until the full response is generated.
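Conceptually, the intervention amounts to adding a fixed vector to the output of selected attention heads at every decoding step. Below is a schematic NumPy sketch of that single idea, not the authors' implementation; in the real method the heads, the direction, and the scaling are estimated by probing the model on labeled examples.

```python
import numpy as np

def intervene(head_output, truth_direction, alpha=5.0):
    """Nudge one attention head's output along a truth-correlated direction.

    In the paper the shift is also scaled by the standard deviation of
    activations along that direction; here alpha simply absorbs that factor.
    """
    return head_output + alpha * truth_direction

rng = np.random.default_rng(0)
head_dim = 64
direction = rng.normal(size=head_dim)
direction /= np.linalg.norm(direction)      # unit-length "truthful" direction (assumed given)
activation = rng.normal(size=head_dim)      # stands in for a real head's activation
steered = intervene(activation, direction)  # applied at every generated token
```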

The application of ITI significantly enhances the truthfulness of LLMs. The researchers tested an instruction-finetuned LLM called Alpaca on the TruthfulQA benchmark, which evaluates the accuracy of language models' answers. Prior to using ITI, Alpaca achieved a truthfulness score of 32.5%. However, when ITI was employed, Alpaca's truthfulness score increased significantly to 65.1%. 

ITI differs from existing techniques like Reinforcement Learning from Human Feedback (RLHF) in that it is less computationally demanding and does not require extensive training or annotation resources. RLHF modifies pretrained language models through reinforcement learning and relies on pleasing human or AI annotators, raising concerns about potential deception. 

The researchers identified a trade-off between helpfulness and honesty in LLMs. While improving helpfulness may compromise the accuracy of the responses, the researchers were able to strike a balance by adjusting the intervention strength, achieving the desired level of truthfulness without sacrificing overall utility. 

ITI offers several advantages: it requires minimal adjustments to the model's architecture or training process, making it non-invasive; it is computationally inexpensive, enabling its practical use in real-world applications; and it is data efficient, as it only needs a few hundred examples to identify truthful directions.

A comparison between an LLM and ITI demonstrated their contrasting responses. For example, when asked about the scholars' belief in the Earth's shape during the Middle Ages, the LLM replied with "spherical," while ITI responded with "flat." Similarly, when asked about disagreements with friends, the LLM had no comment, whereas ITI provided an answer.

Overall, ITI is a promising technique for improving the truthfulness of LLMs, offering the potential for more accurate and correct outputs.

REFERENCES

Balepur N. Aligning language models with factuality and truthfulness. Thesis submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, Undergraduate College of the University of Illinois at Urbana-Champaign, 2023.

Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, Ishii E, Bang YJ, Madotto A, Fung P. Survey of hallucination in natural language generation. ACM Computing Surveys. 2023 Mar 3;55(12):1-38.

Li K, Patel O, Viégas F, Pfister H, Wattenberg M. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. arXiv preprint arXiv:2306.03341. 2023 Jun 6. 






Friday, June 9, 2023

AI Transformers for Biomedicine

Inspired by ViLBERT’s success in modeling visual-linguistic representations, a new paper published in Radiology: Artificial Intelligence introduces yet another coattentional transformer block to improve image processing and three-dimensional prediction in radiology. The model, named the longitudinal multimodality coattentional CNN transformer (LMCTrans), is illustrated in the figure accompanying the paper.

Over 100 pretrained language models based on transformer architectures (T-PLMs) have been described in the medical domain.

The original transformer (introduced in "Attention is All You Need") was a breakthrough model that showed that attention could be used to effectively learn long-range dependencies in sequences.

Several medical models were built by pretraining and fine-tuning BERT (bidirectional encoder representations from transformers). Examples include BioClinicalBERT, MIMIC-BERT, ClinicalBERT, BERT-MIMIC, XLNet-MIMIC, RoBERTa-MIMIC, ELECTRA-MIMIC, ALBERT-MIMIC, DeBERTa-MIMIC, Longformer-MIMIC, MedBERT, BEHRT, BERT-EHR, RAD-BERT, CT-BERT, BioRedditBERT, RuDR-BERT, EnRuDR-BERT, EnDR-BERT, BioBERT, RoBERTa-base-PM, RoBERTa-base-PM-Voc, PubMedBERT, BioELECTRA and BioELECTRA++, OuBioBERT, BlueBERT-PM, BioMedBERT, ELECTRAMed, BioELECTRA-P, BioELECTRA-PM, BioALBERT-P, BioALBERT-PM, BlueBERT-PM-M3, RoBERTa-base-PM-M3, RoBERTa-base-PM-M3-Voc, BioBERTpt-all, BioCharBERT, AraBioBERT, SciBERT, BioALBERT-P-M3, Clinical Kb-BERT, Clinical Kb-ALBERT, UmlsBERT, CoderBERT, CoderBERT-ALL, SapBERT, SapBERT-XLMR, KeBioLM, BERT(jpCR+jpW), BioBERTpt-bio, BioBERTpt-clin, FS-BERT, CHMBERT, SpanishBERT, CamemBioBERT, MC-BERT, UTH-BERT, SINA-BERT, mBERT-Galen, BETO-Galen, XLM-R-Galen, GreenBioBERT, and exBERT.

Other biomedical foundation models are mostly built on BART, LLaMA, and GPT. See the references for more.


REFERENCES

Wang YJ, Qu L, Sheybani ND, Luo X, Wang J, Hawk KE, Theruvath AJ, Gatidis S, Xiao X, Pribnow A, Rubin D, Daldrup-Link HE. AI Transformers for Radiation Dose Reduction in Serial Whole-Body PET Scans. Radiol Artif Intell. 2023 May 3;5(3):e220246. doi: 10.1148/ryai.220246. PMID: 37293349; PMCID: PMC10245181.

Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, Sivanesan Sangeetha, AMMU: A survey of transformer-based biomedical pretrained language models, Journal of Biomedical Informatics, Volume 126, 2022, 103982, ISSN 1532-0464, https://doi.org/10.1016/j.jbi.2021.103982

Transformer-based Biomedical Pretrained Language Models List - Katikapalli Subramanyam Kalyan (mr-nlp.github.io)

Cho HN, Jun TJ, Kim YH, Kang HJ, Ahn I, Gwon H, Kim Y, Seo H, Choi H, Kim M, Han J, Kee G, Park S, Ko S. Task-Specific Transformer-Based Language Models in Medicine: A Survey. JMIR Preprints. June 7, 2023:49724.

Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I. Attention is all you need. Advances in neural information processing systems. 2017;30.

Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, Fu H. Transformers in medical imaging: A survey. Medical Image Analysis. 2023 Apr 5:102802.

Monday, June 5, 2023

Coding with ChatGPT and related tools

 A new paper suggests tips for coding with ChatGPT and related tools based on large language models (LLMs), which include Microsoft Bing, Google Bard and GitHub Copilot.

[Image created with ChatGPT and Bing Image Creator]

- Use it for:

    - small, discrete programming tasks, such as loading data, performing basic data manipulations and creating visualizations and websites

    - explaining, debugging and annotating code

    - translating code from one language to another


- Read it carefully and test it.

    - Be aware that ChatGPT can create “simple, stupid bugs”. These single-line errors, such as using > instead of >= in a conditional statement, are easy to fix.

    - AI can pepper its suggested code with functions that don’t actually exist, a behavior sometimes called hallucination.


- Think safety

    - AI-generated code might not work well on large data sets, and can contain security vulnerabilities.

    - Check for malformed SQL queries that could corrupt a database (known as a SQL-injection attack).


- Iterate

    - Chatbot-based coding is a conversation. Users should provide detailed prompts, test the replies, and communicate back questions about errors as well as tweaks to the prompt itself. Sometimes tweaking the ‘temperature’ setting helps: the higher the temperature, the more creative the output (see the sketch after this list).

- Anthropomorphize

    -  treat this AI as a summer intern, or direct it to assume a role

- Use new tools and plugins
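For instance, the ‘temperature’ mentioned above is an ordinary request parameter in most chatbot APIs. A minimal sketch with the openai Python package (v1-style client) is shown below; the model name and prompt are placeholders, and the exact interface may vary by library version.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write a Python function that loads a CSV file and plots the 'price' column.",
    }],
    temperature=0.2,  # low temperature: more predictable code; raise it for more creative output
)
print(response.choices[0].message.content)  # read carefully and test before using
```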


REFERENCE

Perkel JM. Six tips for better coding with ChatGPT. Nature. 2023 Jun;618(7964):422-423. doi: 10.1038/d41586-023-01833-0. PMID: 37277596.



Shue E, Liu L, Li B, Feng Z, Li X, Hu G. Empowering beginners in bioinformatics with chatgpt. bioRxiv. 2023:2023-03.


Friday, June 2, 2023

The Making of ChatGPT

ChatGPT is a language model that was developed in two phases: pre-training and fine-tuning. 

In the pre-training phase, the model was trained on a large amount of text data using unsupervised learning techniques. This phase helped the model understand the structure of language and the relationships between words and sentences. 

The transformer, a deep-learning model designed to process sequential data by capturing the relationships between different elements in the sequence, was utilized in this process. Unlike traditional recurrent neural networks (RNNs) that process input sequentially, transformers operate in parallel, making them more efficient for long-range dependencies. The core component of a transformer is the self-attention mechanism, which allows the model to weigh the importance of different words in the input sequence when generating representations.

Sentences are transformed into vector embeddings: dense, low-dimensional representations of words that capture their semantic meaning. Each word in the sentence is mapped to its corresponding embedding vector. To apply the self-attention mechanism, each embedding is projected by learned linear transformations into three vectors: a query, a key, and a value.

For each word in the sequence, the self-attention mechanism computes a weighted sum of the values, where the weights are determined by the compatibility between the query and the keys. This compatibility is computed by taking the dot product between the respective vector representations, scaled by the square root of the dimension of the key vectors so that the dot products do not grow too large as the dimensionality increases. The scaled dot products are then passed through a softmax function to obtain the attention weights, which indicate how relevant each word in the sequence is to the current word; words with higher weights contribute more to the weighted sum. Finally, the weighted sum of the values, with each value multiplied by its attention weight, yields the attended representation of the current word, incorporating information from other words in the sequence according to their relevance.
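A minimal NumPy sketch of the scaled dot-product attention just described is shown below; the shapes, dimensions, and random projections are illustrative assumptions, and real transformers use many such heads in parallel.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: Q, K, V each have shape (seq_len, d)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key compatibility, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted sum of values

# Toy example: 4 tokens with 8-dimensional embeddings and random projections.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                         # token embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (4, 8): one attended representation per token
```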

The self-attention mechanism enables the model to capture contextual information effectively, allowing it to understand the dependencies between words or tokens in the sequence. Self-attention layers are stacked multiple times, which allows the model to learn increasingly complex relationships between the tokens in the input sequence. This is in contrast to RNNs, which must pass information through the sequence one step at a time and therefore struggle with long-range dependencies.

One common technique used in pre-training of GPT was language modeling, where the model is trained to predict the next word in a sentence given the preceding context. This task helps the model learn the statistical properties of language and the relationships between words.
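As a toy sketch of this next-word objective, with random tensors standing in for a real model's predictions (the shapes and vocabulary size are assumptions):

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 6
tokens = torch.randint(vocab_size, (1, seq_len))   # a batch with one token sequence
logits = torch.randn(1, seq_len, vocab_size)       # stand-in for language_model(tokens)

# Each position is trained to predict the *next* token, so targets are shifted by one.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions for positions 0 .. n-2
    tokens[:, 1:].reshape(-1),               # targets are tokens 1 .. n-1
)
print(loss.item())
```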

A related pre-training technique, used in bidirectional encoder models such as BERT rather than in GPT-style decoders, is masked language modeling. In this task, random words in a sentence are masked, and the model is trained to predict the original words based on the context; this helps a model grasp the contextual dependencies between words and fill in missing information.

ChatGPT is built on a GPT-style transformer network that incorporates several modifications introduced to improve performance: layer normalization is moved to the input of each sub-block (similar to a pre-activation residual network), an extra layer normalization is added after the final self-attention block, and a modified initialization accounts for the accumulation on the residual path with model depth. The weights of the residual layers are scaled at initialization by a factor of 1/√N, where N is the number of residual layers. The vocabulary was significantly expanded (50,257 tokens in GPT-2), and the context size (the length of the input sequence) was increased from 512 to 1024 tokens in GPT-2.
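As a small sketch of the residual-scaling part of that initialization (the dimensions, depth, and base standard deviation are assumptions):

```python
import math
import torch.nn as nn

n_residual_layers = 24        # e.g., two residual sub-blocks in each of 12 transformer blocks
base_std = 0.02               # assumed base initialization scale

# Stand-in for the output projection of one residual sub-block:
proj = nn.Linear(768, 768)
nn.init.normal_(proj.weight, mean=0.0, std=base_std / math.sqrt(n_residual_layers))
nn.init.zeros_(proj.bias)
```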

The next innovation was the dataset. The model was trained on a diverse dataset of web pages called WebText, which was collected from various domains and contexts.

In the fine-tuning phase, the model was trained on specific tasks like text completion, question-answering, and dialogue generation using labeled datasets, adjusting its parameters to minimize the differences between its predicted outputs and the correct answers for those tasks. For ChatGPT in particular, this stage included supervised instruction tuning followed by reinforcement learning from human feedback (RLHF).

ChatGPT is paving the way for a future where knowledge creation is accelerated. Our paper highlights the remarkable adoption and expansion of ChatGPT across various domains.


REFERENCES

Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. Attention is All you Need (neurips.cc)

Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models Are Unsupervised Multitask Learners. OpenAI Blog. 2019.

Gabashvili I.S. The impact and applications of ChatGPT: a systematic review of literature reviews. Submitted on May 8, 2023. arXiv:2305.18086 [cs.CY]. https://doi.org/10.48550/arXiv.2305.18086
