WoodWing Studio introduces Artificial Intelligence (AI) support to make knowledge work even more effective. This functionality helps users find the right information faster and make better use of the collective knowledge in their organization.
Because we understand that organizations are rightly critical of AI implementations, we have deliberately chosen a transparent, secure approach. Every choice is guided by the principle that customer data belongs to and remains with the customer, and that organizations keep full control over when and how AI is deployed.
Below you will find answers to the most frequently asked questions about our AI functionality.
Activation and control
Is the AI functionality automatically activated?
No, the AI functionality is never automatically activated. Each organization determines completely independently whether and when features in Studio that make use of AI are used.
Technical architecture
Which AI model is used?
We use Amazon Bedrock as a platform, combined with various specialized AI models. For answering questions, we deploy Anthropic's Claude, one of the most advanced and reliable AI models available. This choice is based on:
- Proven reliability: Claude is known for accurate, factual answers.
- Enterprise quality: specially developed for professional applications.
- Security focus: built-in mechanisms against abuse and errors.
- Transparency: clear about limitations and uncertainties.
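For readers curious what a Claude call via Amazon Bedrock looks like at the API level, the sketch below builds the JSON request body used by Anthropic's messages format on Bedrock. This is purely illustrative and is not WoodWing's actual code; the model ID, version string, and parameter values are assumptions based on Bedrock's public documentation.

```python
import json

# Illustrative only: not WoodWing's implementation. The model ID below is a
# hypothetical choice; real deployments pick a model available in their region.
CLAUDE_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_claude_request(question: str, max_tokens: int = 512) -> str:
    """Build the JSON request body for a Claude call via Amazon Bedrock."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": question},
        ],
    }
    return json.dumps(body)

# Sending the request would use the AWS bedrock-runtime client, e.g.:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="eu-west-1")
#   response = client.invoke_model(modelId=CLAUDE_MODEL_ID,
#                                  body=build_claude_request("..."))
# All traffic stays inside AWS; no third-party endpoint is called.

print(build_claude_request("What is the article's main topic?"))
```

Because the Bedrock runtime endpoint lives inside AWS, this is the architectural reason the article can state that data never leaves the AWS infrastructure.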
What impact does this have on our data processing agreement?
No impact whatsoever. This is a deliberate architectural choice where all AI processing remains within Amazon’s AWS platform where the Studio environment already runs. Data never leaves the trusted AWS infrastructure, no new external parties are involved in data processing, and existing security and privacy agreements remain unchanged.
What user data does the model use?
With AI Assistant, the AI model uses the article you have open to provide personalized, contextually relevant answers. This includes basic article components. Workflow context and metadata are excluded.
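To make the inclusion/exclusion rule above concrete, here is a hypothetical sketch of how a contextual prompt could be assembled so that only article components reach the model. The field names, example data, and the `assemble_prompt` helper are all illustrative assumptions, not Studio's actual schema or code.

```python
# Hypothetical sketch of context assembly for an AI Assistant prompt.
# Field names and example values are illustrative, not Studio's real schema.

ARTICLE = {
    "components": {           # basic article components: sent to the model
        "head": "Quarterly results",
        "intro": "Revenue grew 8% year over year.",
        "body": "The growth was driven by subscription renewals.",
    },
    "workflow": {             # workflow context: deliberately excluded
        "status": "In review",
        "assignee": "j.doe",
    },
    "metadata": {             # metadata: deliberately excluded
        "created": "2024-05-01",
    },
}

def assemble_prompt(article: dict, question: str) -> str:
    """Combine only the article's components with the user's question."""
    context = "\n".join(article["components"].values())
    return f"Article:\n{context}\n\nQuestion: {question}"

prompt = assemble_prompt(ARTICLE, "Summarize this article.")
assert "In review" not in prompt    # workflow context never reaches the model
assert "2024-05-01" not in prompt   # metadata never reaches the model
```

The point of the sketch is the filtering step: whatever the real component model looks like, workflow state and metadata are simply never placed into the prompt that is sent to the AI model.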
How is our data processed in the AI model?
Data remains completely within your own secure environment:
- All AI calculations take place within the AWS environment; data is not copied to external AI services, and processing happens in real time without permanent storage elsewhere.
- Communication is secured: questions and answers remain within the organization's environment, there is no exchange with other Studio instances, and communication between all components is encrypted.
- The system follows privacy-by-design principles, with minimal data use for maximum results and automatic deletion of temporary processing data, ensuring transparency and control.
How are prompts stored?
Prompts (the questions asked to the AI) are stored in a backend service managed by the Studio Developers. They are only partially visible to users.
Is the model trained with our data?
No, absolutely not. This is a fundamental principle of our AI implementation:
- Documents and information are never used to train the AI model, the underlying AI model remains unchanged through use, and intellectual property and trade secrets remain fully protected.
- There is strict separation of services: the AI model is a ready-made service that we use, we add no customer-specific training, and data remains local and is not merged with other data.
- This protects competitive advantage by keeping unique processes and knowledge private; there is no risk that your information benefits other organizations, and full control is maintained over intellectual property.
Legislation and compliance
How does your implementation relate to the AI Act?
We take the AI Act very seriously and have proactively aligned our implementation with the expected requirements:
- Proactive compliance: use of transparent, established AI models with extensive documentation, implementation of strict data security and privacy protection, and extensive risk analysis and impact assessments.
- Transparency and control: complete openness about which AI technology is used, clear communication about capabilities and limitations, and final control over decisions remains with users.
- Continuous preparedness: ongoing monitoring of AI Act developments, regular updates of compliance measures, and proactive adjustments once the regulation becomes definitive.
Have you conducted a DPIA/IAMA?
Yes, we have conducted extensive assessments prior to implementation:
- The Data Protection Impact Assessment (DPIA) contains a complete analysis of privacy risks and mitigation measures, an assessment of data flows and processing purposes, and specific attention to AI-related privacy aspects.
- The Impact Assessment for Machine Learning Applications (IAMA) includes an extensive risk analysis of the AI implementation, an assessment of bias, fairness, and transparency, and an evaluation of potential societal impact.
Security and reliability
Why this specific technology choice?
Our choice of Amazon Bedrock and Anthropic's Claude is based on strict criteria for enterprise use:
- Proven technology: Amazon Bedrock is specially developed for enterprise AI applications, Anthropic's Claude has undergone extensive testing in professional environments, and we use no experimental technologies, only established solutions.
- Security record: extensive security audits and certifications, a proven track record in sensitive sectors, and continuous security monitoring and updates.
- Transparency: open communication about capabilities and limitations, extensive available documentation, and clear accountability structures.
How do you ensure data sovereignty?
Data sovereignty is central to our architecture:
- Local processing: all AI processing takes place within the AWS region, there is no data transfer to other geographical locations, and compliance with local legislation is guaranteed.
- Property rights: data remains completely owned by the organization, no licenses or rights are transferred to AI providers, and full control is maintained over use and deletion.
- Governance: organizations determine the use of AI functionality themselves, can stop or adjust it at any time, and there are transparent contractual agreements about data processing.
What does this mean for our information security?
The AI functionality introduces no new security risks:
- It follows exactly the same security protocols as the existing Studio environment, there is no weakening of existing security measures, and there is complete integration with current access controls and permissions.
- Additional security measures include extra logging of AI interactions for audit purposes, specific monitoring of AI-related activities, and additional encryption of AI communication.
- Risk mitigation is ensured through proactive identification and management of AI-specific risks, regular security assessments of AI components, and incident response procedures specific to AI-related events.
Contact
For additional questions about the AI functionality, contact our support team. We are ready to answer specific questions about implementation, security or compliance.